At least some of the arguments offered by Richard Rorty in Philosophy and the Mirror of Nature are great. Understanding the arguments takes time because they are specific criticisms of a long tradition of philosophy. A neophyte might respond to his arguments by saying "Well, the position he's attacking sounds ridiculous anyway, so I don't see why I should care about his criticisms." To really appreciate and understand the argument, the reader needs to have a sense of why prior philosophers were driven to these seemingly ridiculous positions in the first place, and how their commitment to those positions stems from commitment to other very common-sensical positions (like the correspondence theory of truth). Only then can you appreciate how Rorty's arguments are really an attack on those common-sensical positions rather than on some outré philosophical ideas.
Perhaps explicitly thinking of them as systems of equations (or transformations on a vector) would be helpful.
As an example, suppose you are asked to multiply matrices A and B, where A is [1 2, 0 4, -1 2] (the commas represent the end of a row) and B is [2 1 0, 3 1 2]. Start by taking the rightmost matrix (B in this case) and converting it into a series of linear expressions, one for each row. Since the first row is 2 1 0, the corresponding expression is 2x + 1y + 0z. Assign each of these expressions to a new variable. So we now have
X = 2x + y
Y = 3x + y + 2z
Now do the same thing with the matrix on the left, except this time use the new variables you've introduced (X and Y), so the three expressions you end up with (one for each row) will be
X + 2Y
4Y
-X + 2Y
Now that you have these formulae, substitute in the values of X and Y based on your earlier equations. You get
(2x + y) + 2(3x + y + 2z)
4(3x + y + 2z)
-(2x + y) + 2(3x + y + 2z)
Simplifying, you get
8x + 3y + 4z
12x + 4y + 8z
4x + y + 4z
The coefficients of these equations are the result of the multiplication. So the product of the two matrices is [8 3 4, 12 4 8, 4 1 4].
I'll admit this is not the quickest way to go about multiplying matrices, but it might be easier for you to remember since it doesn't seem as arbitrary. And maybe once you get used to thinking about multiplication this way, the usual visual rule will start making more sense to you.
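If it helps to see the procedure mechanized, here is a short Python sketch of the same substitution method (using sympy for the algebra; that library choice is my own, any computer algebra system would do):

```python
# A sketch of the substitution method described above, using sympy.
from sympy import symbols, expand

x, y, z = symbols("x y z")

# Rightmost matrix B: one expression per row.
X = 2*x + 1*y + 0*z
Y = 3*x + 1*y + 2*z

# Leftmost matrix A: one expression per row, written in terms of X and Y.
rows = [1*X + 2*Y, 0*X + 4*Y, -1*X + 2*Y]

for r in rows:
    print(expand(r))
# Prints 8*x + 3*y + 4*z, then 12*x + 4*y + 8*z, then 4*x + y + 4*z.
# The coefficients are exactly the rows of the product [8 3 4, 12 4 8, 4 1 4].
```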
I think Bostrom's argument applies even if they aren't "highly accurate". If they are simulated at all, you can apply his argument.
I don't think that's true. The SSA will have different consequences if the simulated minds are expected to be very different from ours.
If we suppose that simulated minds will have very different observations, experiences and memories from our own, and we consider the hypothesis that the vast majority of minds in our universe will be simulated, then SSA simply disconfirms the hypothesis. If I should reason as if I am a random sample from the pool of all observers, then any theory which renders my observations highly atypical will be heavily disconfirmed. SSA will simply tell us it is unlikely that the vast majority of minds are simulated. Which means that either civilizations don't get to the point of simulating minds or they choose not to run a significant number of simulations.
If, on the other hand, we suppose that a significant proportion of simulated minds will be quite similar to our own, with similar thoughts, memories and experiences, and we further assume that the vast majority of minds in the universe are simulated, then SSA tells us that we are likely simulated minds. It is only under those conditions that SSA delivers this verdict.
This is why, when Bostrom describes the Simulation Argument, he focuses on "ancestor-simulations". In other words, he focuses on post-human civilizations running detailed simulations of their evolutionary history, not just simulations of any arbitrary mind. It is only under the assumption that post-human civilizations run ancestor-simulations that the SSA can be used to conclude that we are probably simulations (assuming that the other two possible conclusions of the argument are rejected).
So I think it matters very much to the argument that the simulated minds are a lot like the actual minds of the simulators' ancestors. If not, the argument does not go through. This is why I said you seem to simply be accepting (2), the conclusion that post-human civilizations will not run a significant number of ancestor-simulations. Your position seems to be that the simulations will probably be radically dissimilar to the simulators (or their ancestors). That is equivalent to accepting (2), and does not conflict with the simulation argument.
You seem to consider the Simulation Argument similar to the Boltzmann brain paradox, which would raise the same worries about empirical incoherence that arise in that paradox, worries you summarize in the parent post. The reliability of the evidence that seems to point to me being a Boltzmann brain is itself predicated on me not being a Boltzmann brain. But the restriction to ancestor-simulations makes the Simulation Argument importantly different from the Boltzmann brain paradox.
I am taking issue with the conclusion that we are living in a simulation even given premise (1) and (2) being true.
(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.
He's saying that (3) doesn't hold if we are not in a simulation, so either (1) or (2) is true. He's not saying that if we're not in a simulation, we somehow are actually in a simulation given this logic.
Right. When I say "his conclusion is still true", I mean the conclusion that at least one of (1), (2) or (3) is true. That is the conclusion of the simulation argument, not "we are living in a simulation".
If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.
This, I think, is a possible difference between your position and Bostrom's. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post).
I actually suspect that you are doing neither of these things, though. You seem to be simply denying that the minds our post-human descendants will simulate (if any) will be similar to our own minds. This is what your game AI comparisons suggest. In that case, your argument is not incompatible with Bostrom's conclusion. Remember, the conclusion of the simulation argument is that either (1), (2), or (3) is true. You seem to be saying that (2) is true -- that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their descendants. If that's all you're claiming, then you're not disagreeing with the simulation argument.
First, Bostrom is very explicit that the conclusion of his argument is not "We are probably living in a simulation". The conclusion of his argument is that at least one of the following three claims is very likely to be true -- (1) humans won't reach the post-human stage of technological development, (2) post-human civilizations will not run a significant number of simulations of their ancestral history, or (3) we are living in a simulation.
Second, Bostrom has addressed the objection you raise here (in his Simulation Argument FAQ, among other places). He essentially flips your disjunctive reasoning around. He argues that we are either in a simulation or we are not. If we are in a simulation, then claim 3 is obviously true, by hypothesis. If we are not in a simulation, then our ordinary empirical evidence is a veridical guide to the universe (our universe, not some other universe). This means the evidence and assumptions used as the basis for the simulation argument are sound in our universe. It follows that since claim 3 is false by hypothesis, either claim 1 or claim 2 is very likely to be true. It's worth noting that these two are claims about our universe, not about some parent universe.
In other words, your objection is based on the argument that if we are in a simulation, there is no good reason to trust the assumptions of the simulation argument (such as assumptions about how our simulators will behave). Bostrom's reply is that if we are in a simulation, then his conclusion is true anyway, even if the specific reasoning he uses doesn't apply. If we are not in a simulation, then the reasoning he uses does apply, so his conclusion is still true.
There does seem to be some sort of sleight-of-mind going on here, if you want my opinion. I generally feel that way about most non-trivial uses of anthropic reasoning. But the exact source of the sleight is not easy for me to detect. At the very least, Bostrom has a prima facie response to your objection, so you need to say something about why his response is flawed. Making your objection and Bostrom's response mathematically precise would be a good way to track down the flaw (if any).
"Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all?"
-- David Chalmers
These questions may be a product of conceptual confusion, but they don't seem that way to me. Perhaps I am confused in the same way.
When you update, you're not simply imagining what you would believe in a world where E was true, you're changing your actual beliefs about this world. The point of updates is to change your behavior in response to evidence. I'm not going to change my behavior in this world simply because I'm imagining what I would believe in a hypothetical world where E is definitely true. I'm going to change my behavior because observation has led me to change the credence I attach to E being true in this world.
Updating by Bayesian conditionalization does assume that you are treating E as if its probability is now 1. If you want an update rule that is consistent with maintaining uncertainty about E, one proposal is Jeffrey conditionalization. If P1 is your initial (pre-evidential) distribution, and P2 is the updated distribution, then Jeffrey conditionalization says:
P2(H) = P1(H | E) P2(E) + P1(H | ~E) P2(~E).
Obviously, this reduces to Bayesian conditionalization when P2(E) = 1.
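In case a worked number helps, here is a minimal Python sketch of the rule (the probabilities are made up purely for illustration):

```python
# Jeffrey conditionalization: P2(H) = P1(H|E) P2(E) + P1(H|~E) P2(~E).
def jeffrey_update(p1_h_given_e, p1_h_given_not_e, p2_e):
    return p1_h_given_e * p2_e + p1_h_given_not_e * (1 - p2_e)

# With full certainty in E, this is just Bayesian conditionalization:
print(jeffrey_update(0.9, 0.2, 1.0))  # 0.9, i.e. P1(H|E)

# With residual uncertainty about E, the two conditionals are blended:
print(jeffrey_update(0.9, 0.2, 0.7))  # 0.69
```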
Credit and accountability seem like good things to me, and so I want to live in a world where people/groups receive credit for good qualities, and are held accountable for bad qualities.
If this is your concern, then you should take into account what sorts of groups are appropriate loci for credit and accountability. This will, of course, depend on what you think is the point of credit/accountability.
If you believe, as I do, that the function of credit and accountability is to influence future behavior, then it seems that the appropriate loci of credit/accountability should be "agential". In other words, objects of credit and blame should be capable of something resembling goal-directed alteration of behavior. Individual people are appropriate loci on this account, since they are (at least, mostly) paradigmatic agents.
Some groups might also qualify as agential, and thus as appropriate loci of credit and blame. Corporations come to mind, as do nations. But that is because those groups have a particular organizational structure that makes them somewhat agent-like. Not every group has this quality. The group of all left-handed people, for instance, is not agent-like in any relevant sense, so I don't see the point of assigning credit or blame to it. Similarly for racial groups or genders.
It seems to me that your objection here is driven mainly by a general dislike of Gleb's contributions (and perhaps his presence on LW), rather than a sincere conviction about the importance of your point. I mean, this is a ridiculous nitpick, and the hostility of your call-out is completely disproportionate to the severity of Gleb's supposed infraction.
While Gleb's aside might be a "lie" by some technical definition, it certainly doesn't match the usual connotations of the term. I see virtually zero harm in the kind of "lie" you're focusing on here, so I'm not sure about the value of your piece of advice, other than signalling your aversion towards Gleb.
I absolutely agree that Kant's system as represented in the Groundwork is unworkable. But you could say the same about pretty much any pre-20th-century philosopher's major work. I think the fact that someone was even trying to think about ethics along essentially game-theoretic lines in the 18th century is pretty revolutionary and worthy of respect, even if he did get important things wrong. As far as I'm aware, no one else was even in the ballpark.
ETA: I do think a lot of philosophers scoff (correctly) at Kant's object-level moral views, not only because of their absurdity (the horrified tone in which he describes masturbation still makes me chuckle) but because of the intellectual contortions he would go through to "prove" them using his system. While I believe he made very important contributions to meta-ethics, his framework was nowhere near precise enough to generate a workable applied ethics. So yeah, Kant's actual ethical positions are pretty scoff-worthy, but the insight driving his moral framework is not.
Isn't that motte/bailey
Not sure it's motte-and-bailey. I do think there are several serious pathologies in large swathes of contemporary philosophy. And I say this not as a dilettante, but as a professional philosopher. There are areas of philosophy where these pathological tendencies are being successfully held at bay, and I do think there are promising signs that those areas are growing in influence. But much of mainstream philosophy, especially mainstream metaphysics and epistemology, does suffer from continued adherence to what I consider archaic and unhelpful methodology. And I think that's what Luke is trying to point out. He does go overboard with his rhetoric, and I think he lacks a feel for the genuine insights of the Western philosophical tradition (as smart and insightful as I think Yudkowsky is, I really find it odd that someone who purports to be reasonably familiar with philosophy would cite him as their favorite philosopher). But I think there is a sound point lurking under there, and not merely a banal "motte"-style point.
I was only pointing out in response to the OP that I have been harping on LW's silly anti-academic sentiment for ages, that's all.
I absolutely agree with you on the silliness of the anti-academic sentiment.
I can help with the second request:
I think Luke will agree with you on what you say here, though. I remember commenting on one of his posts that was critical of philosophy, saying that his arguments didn't really apply to the area of philosophy I'm involved in (technical philosophy of science). Luke's response was essentially, "I agree. I'm not talking about philosophy of science." I think he'd probably say the same about philosophical work on decision theory and causal inference.
I don't know of many analytic philosophers who scoff at his ethics, although there are certainly many who disagree with it. There are also many analytic philosophers who consider his ethics to be a significant advance in moral reasoning. As an example, Derek Parfit, in his recent book, constructs an ethical system that tries to reconcile the attractions of both consequentialism and Kantian deontological ethics.
Kant's discussion of the categorical imperative, especially the first formulation of the imperative (act according to the maxim that you would will to be a universal law), prefigures various contemporary attempts to reformulate decision theory in order to avoid mutual defection in PD-like games, including Hofstadter's notion of superrationality and Yudkowsky's Timeless Decision Theory. Essentially, Kantian ethics is based on the idea that ethics stems from nothing more than a commitment to rational decision-making and common knowledge of rationality among interacting agents (although with Kant it's not so much about knowing that other agents are rational but about respecting them by treating them as rational). I don't fully agree with this perspective, but I do think it is remarkably astute and ahead of its time.
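To make the analogy concrete, here is a toy sketch (my own illustration, with hypothetical payoffs) contrasting a classical best-responder with an agent who, in roughly Kantian fashion, treats its choice as if it were a universal law followed by all symmetric agents:

```python
# Prisoner's Dilemma payoffs: (my move, your move) -> my payoff.
payoffs = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Classical reasoning: defection is the best response to either move.
for theirs in ("C", "D"):
    best = max("CD", key=lambda mine: payoffs[(mine, theirs)])
    print(f"Best response to {theirs}: {best}")  # D in both cases

# "Universalizing" reasoning: assume all rational agents choose alike,
# so only the symmetric outcomes (C, C) and (D, D) are on the table.
best_universal = max("CD", key=lambda move: payoffs[(move, move)])
print(f"Universalized choice: {best_universal}")  # C
```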
Kant is talking about good and evil, delight, happiness, character, honor, etc., etc, while Russell is talking about looking at triangles. Which one are people going to want to read?
Except Kant also talked quite a bit about triangles and Russell also talked quite a bit about good and evil. And Kant discussed perceptual epistemology a whole lot more than Russell did. The Critique of Pure Reason, Kant's most significant work, is about epistemology, not ethics.
Also, while much of twentieth-century continental philosophy does build on Kant (although a lot of it is a reaction against Kant), so does much of twentieth-century analytic philosophy. In many ways, the true heirs of Kant in the twentieth century were the logical positivists. Their epistemology was closer to Kant's than any prominent continental philosopher's was. So Kant has just as much claim to being a founder of analytic philosophy as he does to being a founder of continental philosophy.
Kant was not the most lucid writer, but his style was not remotely "analogical" or "literary" (look through Kant's famous Transcendental Deduction and see whether those descriptors seem apt). And much of Kantian philosophy is precisely formulated and subject to falsification. In fact, quite a bit of it has been falsified (his contention that space is necessarily Euclidean, for instance).
Relativists have no non-subjective notion of "normativity", thus the subjective/normative distinction makes no sense to them.
This is not true of all relativists. There are relativists who believe in entirely objective agent-relative moral facts. In other words, they would say something like, "It is an objective moral truth that X is wrong for members of community Y". The normative force of "X is wrong" would apply even to members of community Y who don't believe that X is wrong (hence the objectivity), but it wouldn't apply to people outside community Y (hence the relativism).
Yeah, but what does Genghis Khan's dad say? He is, remarkably, suspected of being a direct ancestor of even more living Asians than Genghis!
Mordecai Kaplan would be unhappy to hear that commitment to ritual and tradition requires belief
I think the issue is not whether commitment to ritual -- as in, a commitment to go through the motions -- requires belief, it's whether experiencing ritual as beautiful requires belief. I think it's plausible that immersing oneself in the context of the ritual, including the requisite belief set, makes it far more meaningful and awe-inspiring. Merely aesthetic appreciation of ritual may not inspire the same depth of feeling as you would experience if every move in the ritual were fraught with spiritual significance for you.
So participating in the tradition without believing may also count as "depriving oneself of beauty". I wouldn't really know, though. I've been a non-believer my entire intellectually aware life, so I have no basis for comparison. I will say that I can't imagine any ritual or tradition driving me into the kind of frenzy you see at some charismatic Pentecostal churches, for instance. But I can't really imagine being driven to the kind of frenzy you see in the average audience for The Price is Right either, so this may be an issue of personality rather than belief.
Boltzmann's original combinatorial argument already presumed a discretization of phase space, derived from a discretization of single-molecule phase space, so we don't need to incorporate quantum considerations to "fix" it. The combinatorics relies on dividing single-particle state space into tiny discrete boxes, then looking at the number of different ways in which particles could be distributed among those boxes, and observing that there are more ways for the particles to be spread out evenly among the boxes than for them to be clustered. Without discretization the entire argument collapses, since no more than one particle would be able to occupy any particular "box", so clustering would be impossible.
So Boltzmann did successfully discretize a box full of particles with arbitrary position and momentum, and using his discretization he derived (discrete approximations of) the Maxwell-Boltzmann distribution and the Boltzmann formula for entropy. And he did all this without invoking (or, indeed, being aware of) quantum considerations. So the Sackur-Tetrode route is not a requirement for a discretized Boltzmann-esque argument. I guess you could argue that in the absence of quantum considerations there is no way to justify the discretization, but I don't see why that should be the case. The discretization need not be interpreted as ontological, emerging from the Uncertainty Principle. It could be interpreted as merely epistemological, a reflection of limits on our abilities of observation and intervention.
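For anyone who wants to see the combinatorics in miniature, here is a toy Python sketch (the particle and box counts are my own, chosen purely for illustration):

```python
# Boltzmann-style counting: the number of microstates (arrangements of
# distinguishable particles) compatible with an occupancy pattern
# (macrostate) is the multinomial coefficient n! / (k1! k2! ... km!).
from math import factorial

def microstates(occupancy):
    n = sum(occupancy)
    count = factorial(n)
    for k in occupancy:
        count //= factorial(k)  # division is always exact here
    return count

print(microstates([2, 2]))        # 6 ways to spread 4 particles over 2 boxes
print(microstates([4, 0]))        # 1 way to cluster them all in one box
print(microstates([5, 5, 5, 5]))  # 11732745024 ways for 20 particles, 4 boxes
print(microstates([20, 0, 0, 0])) # still just 1 clustered arrangement
```

The disparity grows astronomically with particle number, which is why the evenly spread macrostate so thoroughly dominates.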
Incidentally, none of these derivations require the assumption of ergodicity in the system. The result that the size of a macrostate in phase space is proportional to the number of microstates emerges purely from the combinatorics, with no assumptions about the system's dynamics (other than that they are Hamiltonian). Ergodicity, or something like it, is only required to establish that the time spent by a system in a particular macrostate is proportional to the size of the macrostate, and that is used to justify probabilistic claims about the system, such as the claim that a closed system observed at an arbitrary time is overwhelmingly likely to be in the macrostate of maximum Boltzmann entropy.
So ultimately, I do think the point Carroll was making is valid. The Boltzmann entropy -- as in, the actual original quantity defined by Boltzmann and refined by the Ehrenfests, not the modified interpretation proposed by people like Jaynes -- is distinct from the Gibbs entropy. The former can increase (or decrease) in closed system, the latter cannot.
To put it slightly more technically, the Gibbs entropy, being a property of a distribution that evolves according to Hamiltonian laws, is bound to stay constant by Liouville's theorem, unless there is a geometrical change in the accessible phase space or we apply some coarse-graining procedure. Boltzmann entropy, being a property of macrostates, not of distributions, is not bound by Liouville's theorem. Even if you interpret the Boltzmann entropy as a property of a distribution, it is not a distribution that evolves in a Hamiltonian manner. It evolves discontinuously when the system moves from one Boltzmann macrostate to the next.
How is this:
What we may think are fundamental laws of our universe, are merely descriptions of the nature of possible futures consistent with our continued existence.
compatible with this:
Everett Many Worlds is either correct or at least on the right track
Is quantum mechanics an exception to the claim that our conception of the fundamental laws is based on an observation selection effect? Why would it be one?
I think you're ignoring the difference between the Boltzmann and Gibbs entropy, both here and in your original comment. This is going to be long, so I apologize in advance.
Gibbs entropy is a property of ensembles, so it doesn't change when there is a spontaneous fluctuation towards order of the type you describe. As long as the gross constraints on the system remain the same, the ensemble remains the same, so the Gibbs entropy doesn't change. And it is the Gibbs entropy that is most straightforwardly associated with the Shannon entropy. If you interpret the ensemble as a probability distribution over phase space, then the Gibbs entropy of the ensemble is just the Shannon entropy of the distribution (ignoring some irrelevant and anachronistic constant factors). Everything you've said in your comments is perfectly correct, if we're talking about Gibbs entropy.
Boltzmann entropy, on the other hand, is a property of regions of phase space, not of ensembles or distributions. The famous Boltzmann formula equates entropy with the logarithm of the volume of a region in phase space. Now, it's true that corresponding to every phase space region there is an ensemble/distribution whose Shannon entropy is identical to the Boltzmann entropy, namely the distribution that is uniform in that region and zero elsewhere. But the converse isn't true. If you're given a generic ensemble or distribution over phase space and also some partition of phase space into regions, it need not be the case that the Shannon entropy of the distribution is identical to the Boltzmann entropy of any of the regions.
So I don't think it's accurate to say that Boltzmann and Shannon entropy are the same concept. Gibbs and Shannon entropy are the same, yes, but Boltzmann entropy is a less general concept. Even if you interpret Boltzmann entropy as a property of distributions, it is only identical to the Shannon entropy for a subset of possible distributions, those that are uniform in some region and zero elsewhere.
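To illustrate that last point numerically, here is a quick toy computation (my own example; the "region" is just a set of 16 out of 64 discrete microstates):

```python
# Shannon entropy of a discrete distribution, in nats.
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # 0 log 0 is taken to be 0
    return -np.sum(p * np.log(p))

W, total = 16, 64

# Uniform over a region of W microstates, zero elsewhere: the Shannon
# entropy equals log W, the Boltzmann entropy of that region.
p_region = [1.0 / W] * W + [0.0] * (total - W)
print(shannon_entropy(p_region), np.log(W))  # both ~2.7726

# A generic distribution has a well-defined Shannon entropy, but it need
# not equal log-volume of any region in a given partition of the space.
p_generic = np.random.dirichlet(np.ones(total))
print(shannon_entropy(p_generic))
```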
As for the question of whether Boltzmann entropy can decrease spontaneously in a closed system -- it really depends on how you partition phase space into Boltzmann macro-states (which are just regions of phase space, as opposed to Gibbs macro-states, which are ensembles). If you define the regions in terms of the gross experimental constraints on the system (e.g. the volume of the container, the external pressure, the external energy function, etc.), then it will indeed be true that the Boltzmann entropy can't change without some change in the experimental constraints. Trivially true, in fact. As long as the constraints remain constant, the system remains within the same Boltzmann macro-state, and so the Boltzmann entropy must remain the same.
However, this wasn't how Boltzmann himself envisioned the partitioning of phase space. In his original "counting argument" he partitioned phase space into regions based on the collective properties of the particles themselves, not the external constraints. So from his point of view, the particles all being scrunched up in one corner of the container is not the same macro-state as the particles being uniformly spread throughout the container. It is a macro-state (region) of smaller volume, and therefore of lower Boltzmann entropy. So if you partition phase space in this manner, the entropy of a closed system can decrease spontaneously. It's just enormously unlikely. It's worth noting that subsequent work in the Boltzmannian tradition, ranging from the Ehrenfests to Penrose, has more or less adopted Boltzmann's method of delineating macrostates in terms of the collective properties of the particles, rather than the external constraints on the system.
Boltzmann's manner of talking about entropy and macro-states seems necessary if you want to talk about the entropy of the universe as a whole increasing, which is something Carroll definitely wants to talk about. The increase in the entropy of the universe is a consequence of spontaneous changes in the configuration of its constituent particles, not a consequence of changing external constraints (unless you count the expansion of the universe, but that is not enough to fully account for the change in entropy on Carroll's view).
Yeah, I think the initial exclusivity of Facebook really helped. I went to a school near Harvard at the time Facebook launched, and we were all vaguely aware of it as a Harvard-only site. It then expanded to include our school and a few others -- maybe ten or so, all quite prestigious -- and there was widespread adoption almost instantaneously on our campus. I think the sense of being invited to join an exclusive club had a lot to do with that. I don't know if Zuckerberg intended it, but playing on the elitism of college students was a very effective strategy for rapid early adoption, and once that was achieved, there was enough momentum to ensure success as the site steadily opened up to larger and larger populations.
From a decision-theory perspective, I should essentially just ignore the possibility that I'm in the first 100 rooms - right?
Well, what do you mean by "essentially ignore"? If you're asking whether I should assign only negligible credence to the possibility, then yeah, I'd agree. If you're asking whether I should assign literally zero credence to it, so that there are no possible odds, no matter how ridiculously skewed, at which I would accept a bet that I am in one of those rooms... well, now I'm no longer sure. I don't exactly know how to go about setting my credences in the world you describe, but I'm pretty sure assigning 0 probability to every single room isn't it.
Consider this: Let's say you're born in this universe. A short while after you're born, you discover a note in your room saying, "This is room number 37". Do you believe you should update your belief set to favor the hypothesis that you're in room 37 over any other number? If you do, it implies that your prior for the belief that you're in one of the first 100 rooms could not have been 0.
(But, on the other hand, if you think you should update in favor of being in room x when you encounter a note saying "You are in room x", no matter what the value of x, then you aren't probabilistically coherent. So ultimately, I don't think intuition-mongering is very helpful in these exotic scenarios. Consider my room 37 example as an attempt to deconstruct your initial intuition, rather than as an attempt to replace it with some other intuition.)
Theoretically there are as many multiples of 10 as not (both being equinumerous to the integers), but if we define rationality as the "art of winning", then shouldn't I guess "not in a multiple of 10"?
Perhaps, but reproducing this result doesn't require that we consider every room equally likely. For instance, a distribution that attaches a probability of 2^(-n) to being in room n will also tell you to guess that you're not in a multiple of 10. And it has the added advantage of being a possible distribution. It has the apparent disadvantage of arbitrarily privileging smaller numbered rooms, but in the kind of situation you describe, some such arbitrary privileging is unavoidable if you want your beliefs to respect the Kolmogorov axioms.
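To see how lopsided the resulting bet is, here is a quick back-of-the-envelope computation (assuming the 2^(-n) prior just mentioned):

```python
# Prior probability 2^(-n) of being in room n, for n = 1, 2, 3, ...
# Probability of being in a room whose number is a multiple of 10:
# the geometric series 2^(-10) + 2^(-20) + ... sums to 1/1023.
# Truncating at n = 190 loses only a negligible tail (~2^-200).
p_multiple_of_10 = sum(2.0 ** -n for n in range(10, 200, 10))
print(p_multiple_of_10)      # ~0.000978
print(1 - p_multiple_of_10)  # ~0.999: guess "not in a multiple of 10"
```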
When I say the probability distribution doesn't exist, I'm not talking about the possibility of the world you described. I'm talking about the coherence of the belief state you described. When you say "The probability of you being in the first 100 rooms is 0", it's a claim about a belief state, not about the mind-independent world. The world just has a bunch of rooms with people in them. A probability distribution isn't an additional piece of ontological furniture.
If you buy the Cox/Jaynes argument that your beliefs must adhere to the probability calculus to be rationally coherent, then assigning probability 0 to being in any particular room is not a coherent set of beliefs. I wouldn't say this is a case of probability theory not being "expressive enough". Maybe you want to argue that the particular belief state you described ("Being in any room is equally likely") is clearly rational, in which case you would be rejecting the idea that adherence to the Kolmogorov axioms is a criterion for rationality. But do you think it is clearly rational? On what grounds?
(Incidentally, I actually do think there are issues with the LW orthodoxy that probability theory limns rationality, but that's a discussion for another day.)
There is no such thing as a uniform probability distribution over a countably infinite event space (see Toggle's comment). The distribution you're assuming in your example doesn't exist.
Maybe a better example for your purposes would be picking a random real number between 0 and 1 (this does correspond to a possible distribution, assuming the axiom of choice is true). The probability of the number being rational is 0, the probability of it being greater than 2 is also 0, yet the latter seems "more impossible" than the former.
Of course, this assumes that "probability 0" entails "impossible". I don't think it does. The probability of picking a rational number may be 0, but it doesn't seem impossible. And then there's the issue of whether the experiment itself is possible. You certainly couldn't construct an algorithm to perform it.
Do you think that this is what utilitarianism is, or ought to be?
Utilitarianism does offer the possibility of a precise, algorithmic approach to morality, but we don't have anything close to that as of now. People disagree about what "utility" is, how it should be measured, and how it should be aggregated. And of course, even if they did agree, actually performing the calculation in most realistic cases would require powers of prediction and computation well beyond our abilities.
The reason I used the phrase "artificially created", though, is that I think any attempt at systematization, utilitarianism included, will end up doing considerable violence to our moral intuitions. Our moral sensibilities are the product of a pretty hodge-podge process of evolution and cultural assimilation, so I don't think there's any reason to expect them to be neatly systematizable. One response is that the benefits of having a system (such as bias mitigation) are strong enough to justify biting the bullet, but I'm not sure that's the right way to think about morality, especially if you're a moral realist. In science, it might often be worthwhile using a simplified model even though you know there is a cost in terms of accuracy. In moral reasoning, though, it seems weird to say "I know this model doesn't always correctly distinguish right from wrong, but its simplicity and precision outweigh that cost".
So, do you think that, absent a formal algorithm, when presented with a "save" formulation, a (properly trained) philosopher should immediately detect the framing effect, recast the problem in the "die" formulation (or some alternative framing-free formulation), all before even attempting to solve the problem, to avoid anchoring and other biases?
Something like this might be useful, but I'm not at all confident it would work. Sounds like another research project for the Harvard Moral Psychology Research Lab. I'm not aware of any moral philosopher proposing something along these lines, but I'm not extremely familiar with that literature. I do philosophy of science, not moral philosophy.
Realize what's occurring here, though. It's not that individual philosophers are being asked the question both ways and are answering differently in each case. That would be an egregious error that one would hope philosophical training would allay. What's actually happening is that when philosophers are presented with the "save" formulation (but not the "die" formulation) they react differently than when they are presented with the "die" formulation (but not the "save" formulation). This is an error, but also an extremely insidious error, and one that is hard to correct for. I mean, I'm perfectly aware of the error, I know I wouldn't give conflicting responses if presented with both options, but I am also reasonably confident that I would in fact make the error if presented with just one option. My responses in that case would quite probably be different than in the counterfactual where I was only provided with the other option. In each case, if you subsequently presented me with the second framing, I would immediately recognize that I ought to give the same answer as I gave for the first framing, but what that answer is would, I anticipate, be impacted by the initial framing.
I have no idea how to avoid that sort of error, beyond basing my answers on some artificially created algorithm rather than my moral judgment. I mean, I could, when presented with the "save" formulation, think to myself "What would I say in the 'die' formulation?" before coming up with a response, but that procedure is still susceptible to framing effects. The answer I come up with might not be the same as what I would have said if presented with the "die" formulation in the first place.
This assumption comes from expecting an expert to know the basics of their field.
I wouldn't characterize the failure in this case as reflecting a lack of knowledge. What you have here is evidence that philosophers are just as prone to bias as non-philosophers at a similar educational level, even when the tests for bias involve examples they're familiar with. In what sense is this a failure to "know the basics of their field"?
A trained physicist's intuition is rather different from "human intuition" on physics problems, so that's unlikely.
A relevantly similar test might involve checking whether physicists are just as prone as non-physicists to, say, the anchoring effect, when asked to estimate (without explicit calculation) some physical quantity. I'm not so sure that a trained physicist would be any less susceptible to the effect, although they might be better in general at estimating the quantity.
Take, for instance, evidence showing that medical doctors are just as susceptible to framing effects in medical treatment contexts as non-specialists. Does that indicate that doctors lack knowledge about the basics of their fields?
I think what this study suggests is that philosophical training is no more effective at de-biasing humans (at least for these particular biases) than a non-philosophical education. People have made claims to the contrary, and this is a useful corrective to that. The study doesn't show that philosophers are unaware of the basics of their field, or that philosophical training has nothing to offer in terms of expertise or problem-solving.
I thought the defining feature of being a p-zombie was acting as if they had consciousness while not "actually" having it
It's more than just a matter of behavior. P-zombies are supposed to be physically indistinguishable from human beings in every respect while still lacking consciousness.
I teach about 8000 miles away from upstate New York, I'm afraid.
It really depends on what topic you're interested in. Papers tend to be pretty focused on one question, so if you're looking for an overview of a subject, books are the way to go. If you're interested in learning more about some specific problem, I'd be happy to recommend accessible papers if I can think of any.
"Excessive" was probably a poorly chosen word. I meant that the books I listed are the ones that provide the deepest insight into the theories (out of all the books I have seen) within the constraints specified by iarwain (presuming nothing more than high school mathematics). Some of the books teach some slightly more advanced math along the way, because yeah, it's hard to really comprehend much of GR without at least a basic conception of differential geometry, or understand QM without some idea of linear algebra, but none of the books inundates you with math like The Road to Reality does.
I'm assuming you already have some absolutely basic knowledge of the major physical theories, at the level of Brian Greene's The Fabric of the Cosmos (which was recommended in another comment). The books I'll recommend take you deeper into the theories (emphasizing philosophical implications) without excessive mathematics. If you don't have knowledge at this level, read Greene's book first. Some of the books I'm suggesting aren't entirely up to date, but none of them are obsolete. I'm not aware of any more recent books that cover the same material with the same quality. I teach philosophy of physics to non-physics majors, and these are usually among the books I assign (supplemented with recent papers, lecture notes, etc.).
Space-Time: Geroch, General Relativity from A to B
Quantum Mechanics: Albert, Quantum Mechanics and Experience
Statistical Mechanics: Ben-Naim, Entropy and the Second Law: Interpretation and Misss-Interpretations (Supplement with Albert's Time and Chance if you want to go deeper into the "Arrow of Time" issue)
Quantum Field Theory and the Standard Model: Oerter, The Theory of Almost Everything (A pretty superficial book compared to the others on this list, I admit, but I'm not aware of any philosophically deep treatment of QFT that doesn't presume considerable math knowledge. You could also try Feynman's QED, which is excellent but very out-dated.)
Cosmology: Tegmark, Our Mathematical Universe (Good basic overview of cosmology, but the philosophical speculation doesn't meet your third requirement. Try Unger and Smolin's The Singular Universe and the Reality of Time for a counterpoint.)
That's correct, but it is difficult enough to effectively not be self-contained, I think. Being able to apprehend the concepts at the pace and brevity at which Penrose introduces them would require significant prior training in thinking mathematically, or a quite unusually agile mind.
Yeah, that was my assumption as well.
What exactly is that 70% supposed to quantify? Is the claim that, if I wake up tomorrow and no longer find my girlfriend physically attractive, I'll only be 30% as in love with her as I am now? Or is the claim that in 70% of heterosexual romantic relationships, if the male no longer finds the female attractive, he will no longer be in love with her?
Also, why do you consider this a "really good idea"?
Judgements of existence are model-relative. I believe electrons exist because I have an excellent (highly useful) model that involves ontological commitment to electrons. Same for other minds. Same for my own mind, for that matter.
I don't first determine what exists and only then build models of those things. My models tell me what exists.
I reject the correspondence theory of truth (at least what philosophers call the "correspondence theory", which I think has certain important differences from the view Eliezer subscribes to).
I started out writing a description of my views in a comment, but it ended up being way too long, so I made it a separate post. Here it is.
I use Firefox, and the graphs aren't blurry at all.
The prevailing point of view among non-religious scientists (as well as here) is that mental processes (the mind) are reducible to the physical processes in the brain. This part is rather uncontroversial, even Searle agrees with it. Out of the alternatives described on Wikipedia Emergent Materialism is probably the closest to the mainstream thought here:
Emergent materialism explicitly denies that mental properties are reducible to physical processes, so I don't think it's closest to mainstream thought here. Emergence is often used in philosophy as an alternative to reduction. Or did you just mean the closest out of all the versions of property dualism?
I suspect the view in the philosophical taxonomy closest to the LW mainstream is functionalism.
Recipes without real justification.
This sounds more like a pedagogical issue than an inherent problem with classical statistics. I agree that Bayesianism is philosophically "cleaner", but classical statistics isn't just a hodge-podge of unjustified tools. If you're interested in a sophisticated justification of classical methods, this is a good place to start. I'm pretty sure you'll be unconvinced, but it should at least give you some idea of where frequentists are coming from.
To get a more introductory -- but still quite thorough, and more modern -- Bayesian perspective, I recommend John Kruschke's Doing Bayesian Data Analysis. Ignore the silly cover. The book is engagingly written and informative. As a side benefit, it will also teach you R, a very useful language for statistical computing. Definitely worth learning if you are at all interested in data analysis.
Also, you should learn some classical statistics before getting into Bayesian statistics. Jaynes won't really help with that. Kruschke will help a little, but not much. The freely available OpenIntro Statistics textbook is a very good introduction.
I recommend first reading OpenIntro, then Kruschke, then Jaynes.
I suspect this describes the wife's cryopreservation:
I doubt it. The subject of that document was 46 when she died. Chay's wife was 52, according to news reports.
I suspect the O Administration won't make a big deal out of it because Chay's case involves a relatively small amount of money as financial malfeasance goes, and it lacks a racial angle to exploit.
I'm not as opposed to political discussion on this site as many are, but I do think the original point of EY's "Politics is the Mindkiller" post is worth keeping in mind. Inserting this kind of mind-killing aside in an otherwise non-political comment is needlessly inflammatory and distracting. I don't want to see this sort of thing on LW.
Fair enough. Just wanted to let you know that although my comment might have sounded judgmental it genuinely wasn't intended that way. From my perspective, all three reasons for inaction I listed are perfectly legitimate and not deserving of criticism. I'm still not sure whether any concrete action is necessary, although at this point I am virtually certain that it is a Eugine sock puppet.
I already contacted Viliam_Bur with this suspicion a few months ago, and I doubt I'm the first. I'm assuming Viliam either doesn't feel he has sufficient evidence to act, doesn't feel that any action is warranted, or is too busy to follow up on this at the moment.
Yvain's comment below is a new piece of (fairly conclusive) evidence. Maybe that will impact the situation if Viliam felt he had insufficient evidence previously, so it might be worth drawing his attention to this thread.
In the unlikely event that the net positive votes (at that time) given to Azathoth123 reflect the actual attitudes of the lesswrong community the 'public' should be made aware so they can choose whether to continue to associate with the site.
Yes, but wouldn't this be more effective if you first confirmed/disconfirmed your hypothesis about the votes through a mod? In the absence of that information, how is a member of the public to know how to act? My objection was more about the speculative nature of the comment than about the fact that you're "speaking out". I have nothing against speculation per se, but in cases where a claim can be fairly easily verified I prefer to see that happen instead.
I don't see what argument you can possible make for why say transsexuality shouldn't be considered a psychiatric disorder but being an "other kin" should.
How about the fact that everything we know about ontogeny suggests that gender of a child of human parents should be more fluid than its species, since the determination and development of gender-typical physiology in utero is complex and multivocal? There are ontogenetic factors (insufficient uptake of testosterone, for instance) that might lead to a child with male-typical sexual organs but more female-typical neurological features. There aren't any analogously complex species-determining processes involved in the development of a child.
I assign a very high probability (>90%) to Azathoth123 being Eugine_Nier. Given the latter's history, I wouldn't be surprised if Azathoth were involved in voting shenanigans. But I think it would be better if you take this to a mod (Viliam_Bur, I believe) for confirmation/action, rather than speculating in public.
ETA: Just realized that this comment is doing exactly what it was advising against. Slightly embarrassed that I didn't notice while I was writing it.
Jaynes was aware of MWI. Jaynes and Everett corresponded with one another, and Jaynes read a short version of Everett's Ph.D. dissertation (in which MWI was first proposed and defended) and wrote a letter commenting on it. You can read the letter here. He seems to have been very impressed by the theory, describing it as "the logical completion of quantum mechanics, in exactly the same sense that relativity was the logical completion of classical theory". Not entirely sure what he meant by that.