What Are Probabilities, Anyway?
post by Wei Dai (Wei_Dai) · 2009-12-11T00:25:33.177Z · LW · GW · Legacy · 89 comments
In Probability Space & Aumann Agreement, I wrote that probabilities can be thought of as weights that we assign to possible world-histories. But what are these weights supposed to mean? Here I’ll give a few interpretations that I've considered and held at one point or another, and their problems. (Note that in the previous post, I implicitly used the first interpretation in the following list, since that seems to be the mainstream view.)
- Only one possible world is real, and probabilities represent beliefs about which one is real.
- Which world gets to be real seems arbitrary.
- Most possible worlds are lifeless, so we’d have to be really lucky to be alive.
- We have no information about the process that determines which world gets to be real, so how can we decide what the probability mass function p should be?
- All possible worlds are real, and probabilities represent beliefs about which one I’m in.
- Before I’ve observed anything, there seems to be no reason to believe that I’m more likely to be in one world than another, but we can’t let all their weights be equal.
- Not all possible worlds are equally real, and probabilities represent “how real” each world is. (This is also sometimes called the “measure” or “reality fluid” view.)
- Which worlds get to be “more real” seems arbitrary.
- Before we observe anything, we don't have any information about the process that determines the amount of “reality fluid” in each world, so how can we decide what the probability mass function p should be?
- All possible worlds are real, and probabilities represent how much I care about each world. (To make sense of this, recall that these probabilities are ultimately multiplied with utilities to form expected utilities in standard decision theories.)
- Which worlds I care more or less about seems arbitrary. But perhaps this is less of a problem because I’m “allowed” to have arbitrary values.
- Or, from another perspective, this drops yet another hard problem onto the pile of problems called “values”, where it may never be solved.
As you can see, I think the main problem with all of these interpretations is arbitrariness. The unconditioned probability mass function is supposed to represent my beliefs before I have observed anything in the world, so it must represent a state of total ignorance. But there seems to be no way to specify such a function without introducing some information, which anyone could infer by looking at the function.
For example, suppose we use a universal distribution, where we believe that the world-history is the output of a universal Turing machine given a uniformly random input tape. But then the distribution contains the information of which UTM we used. Where did that information come from?
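To make the dependence concrete, here is a toy sketch (not a real universal Turing machine, and ignoring issues like prefix-free coding; the two "machines" and all names below are invented for illustration): feed two different machines uniformly random input tapes and compare the output distributions they induce.

```python
import random
from collections import Counter

def machine_a(tape):
    # Toy stand-in for a UTM: reads the tape two bits at a time;
    # "00" emits "x", "01" emits "y", anything else halts.
    out = []
    for i in range(0, len(tape) - 1, 2):
        pair = tape[i:i + 2]
        if pair == "00":
            out.append("x")
        elif pair == "01":
            out.append("y")
        else:
            break
    return "".join(out)

def machine_b(tape):
    # A different toy machine: reads one bit at a time;
    # "0" emits "x", "1" emits "y", and it halts at the first "11".
    out = []
    for i, b in enumerate(tape):
        if tape[i:i + 2] == "11":
            break
        out.append("x" if b == "0" else "y")
    return "".join(out)

def induced_distribution(machine, tape_len=10, samples=50_000, seed=0):
    # Empirical distribution over outputs when the input tape is uniformly random.
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(samples):
        tape = "".join(rng.choice("01") for _ in range(tape_len))
        counts[machine(tape)] += 1
    return {out: round(n / samples, 3) for out, n in counts.most_common(5)}

print(induced_distribution(machine_a))
print(induced_distribution(machine_b))
```

The two induced distributions differ even though both machines were fed "pure noise", which is the sense in which the choice of machine already carries information.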
One could argue that we do have some information even before we observe anything, because we're products of evolution, which would have built some useful information into our genes. But to the extent that we can trust the prior specified by our genes, it must be that evolution approximates a Bayesian updating process, and our prior distribution approximates the posterior distribution of such a process. The "prior of evolution" still has to represent a state of total ignorance.
These considerations lead me to lean toward the last interpretation, which is the most tolerant of arbitrariness. This interpretation also fits well with the idea that expected utility maximization with Bayesian updating is just an approximation of UDT that works in most situations. I and others have already motivated UDT by considering situations where Bayesian updating doesn't work, but it seems to me that even if we set those aside, there is still reason to consider a UDT-like interpretation of probability where the weights on possible worlds represent how much we care about those worlds.
89 comments
Comments sorted by top scores.
comment by Johnicholas · 2009-12-11T17:28:31.635Z · LW(p) · GW(p)
In order to answer questions like "What are X, anyway?", we can (phenomenologically) turn the question into something like "What can we do with X?" or "What consequences does X have?"
For example, consider the question "What are ordered pairs, anyway?". Sometimes you see "definitions" of ordered pairs in terms of set theory. Wikipedia says that the standard definition of ordered pairs is:
(a, b) := {{a}, {a, b}}
Many mathematicians find this "definition" unsatisfactory, and view it not as a definition, but an encoding or translation. The category-theoretic notion of a product might be more satisfactory. It pins down the properties that the ordered pair already had before the "definition" was proposed and in what sense ANY construction with those properties could be used. Lambda calculus has a couple constructions that look superficially quite different from the set-theory ones, but satisfy the category-theoretic requirements.
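For instance, here is a sketch of the Church-style pairing from the lambda calculus, written with Python lambdas as a stand-in for lambda terms (the names pair/fst/snd are mine). What matters is only that it satisfies the projection equations fst(pair(a, b)) = a and snd(pair(a, b)) = b, which is the property the category-theoretic product pins down.

```python
# A pair is a function that hands its two components to whatever selector it is given.
pair = lambda a, b: (lambda selector: selector(a, b))
fst = lambda p: p(lambda a, b: a)   # first projection
snd = lambda p: p(lambda a, b: b)   # second projection

p = pair(1, "two")
assert fst(p) == 1
assert snd(p) == "two"
```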
I guess this is a response at the meta level, recommending this sort of "phenomenological" lens as the way to resolve these sorts of questions.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2009-12-11T17:40:24.091Z · LW(p) · GW(p)
Lambda calculus has a couple constructions that look superficially quite different from the set-theory ones, but satisfy the category-theoretic requirements.
... as does the set-theoretic one.
ETA: Now that I read more closely, you didn't imply otherwise.
comment by MichaelVassar · 2009-12-11T11:53:51.286Z · LW(p) · GW(p)
This word "possible" carries a LOT of hidden baggage. If math tells us anything its that LOTS of things SEEM possible to us because we aren't logically omniscient but aren't really possible.
While we're at it, how about we drop "worlds" from the mix. I don't think it adds anything. If we replace it with "information flows" do things work better?
Replies from: Peter_de_Blanc, Wei_Dai, timtyler↑ comment by Peter_de_Blanc · 2009-12-11T20:11:35.026Z · LW(p) · GW(p)
Do you mean something precise by "information flows"?
↑ comment by Wei Dai (Wei_Dai) · 2009-12-12T12:47:22.043Z · LW(p) · GW(p)
Possible world is a standard term in several related fields, such as philosophy and linguistics. Are you arguing against my particular usage, or all usage of the term in general?
comment by Douglas_Knight · 2009-12-11T03:31:36.839Z · LW(p) · GW(p)
Lumping probabilities in with utilities sounds pretty close to Vladimir Nesov's Representing Preference by Probability Measures.
comment by Wei Dai (Wei_Dai) · 2020-02-13T07:30:43.181Z · LW(p) · GW(p)
Copied from a chat where I tried to explain interpretations 3 and 4 a bit more:
I'm not sure what it means for a world to be more real either, but to the extent the idea makes sense in the many worlds interpretation of quantum mechanics (where some Everett branches are somehow "more real" or "exist more"), it seems reasonable to extend that to other mathematical structures. One intuition pump is to imagine that the multiverse literally consists of an infinite collection of universal Turing machines, each initialized with a random input tape. So that's #3 in my post. I see #4 as sort of a fallback position, where if there is no fact of the matter about which worlds have more "measure" or "reality fluid" that can be discovered through philosophical reasoning or other kinds of investigation, you can still have UDT "add up to normality" (e.g., make a UDT-based AI behave in a way that a typical human would find reasonable) by hard coding a "prior" into its utility function, which can be interpreted as it simply caring about some worlds more than others.
comment by TruePath · 2009-12-12T22:14:49.130Z · LW(p) · GW(p)
You're getting yourself in trouble because you assume that puzzling questions must have deep answers, when usually the question itself is flawed or misleading. In this case there just doesn't seem to be a need for any explanation of the kind you offer, nor would one be of any use anyway.
These 'explanations' you offer of probability aren't really explaining anything. Certainly we do successfully use probability to reason about systems that behave in a deterministic classical fashion (rolling dice probably counts). No matter what sort of probability you believe in, you have to explain that application. So introducing 'objective' probability merely adds things we need to explain (possible worlds, etc.).
The correct approach is to step back and ask what it is that needs explaining. Well, probability is really nothing but a fancy way of counting up outcomes. So once we justify describing the world in a probabilistic fashion (even when it's deterministic in some sense), the application of mathematical inference to reformulate that description in more useful ways is untroubling. In other words, if it's reasonable to model rolling two six-sided dice as independent uniformly random variables on 1...6, then counting up the combinations and saying there is a 1/6 chance of getting a 7 doesn't raise any new difficulties.
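A quick check of that count, under the stated assumption of two independent fair dice (36 equally likely ordered outcomes):

```python
from fractions import Fraction
from itertools import product

# Enumerate the 36 equally likely ordered outcomes and count those summing to 7.
outcomes = list(product(range(1, 7), repeat=2))
sevens = [o for o in outcomes if sum(o) == 7]
print(Fraction(len(sevens), len(outcomes)))  # 1/6
```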
So the question just comes down to: is it reasonable of us to model the world using random variables? One might worry that some worlds are deeply 'tricky', in that almost always, when two objects appeared to behave like independent random variables, there was in reality some hidden correlation that would eventually pop out to bite you in the ass, and once you'd taken that correlation into account another one would bite you, and so on and so on.
But if you think about it for a while, this isn't really so much a question about the nature of the world as it is a purely mathematical question. If we keep factoring out our best predictions, will the remaining unaccounted-for variation in outcomes appear to be random, i.e., will modeling it as random variables be an accurate way to make predictions? Well, that's actually kinda complicated. I have a theorem (well, a tiny tweak of someone else's theorem plus an interpretation) which I believe says that yes, indeed it must work this way. I won't go into it here, but let me just say one thing to convince you of its plausibility.
Basically the argument is that things only fail to look random because we notice a more accurate way of predicting their behavior. The only evidence that a sequence of observations fails to be random according to the supposed distribution would be a pattern in the observations not captured by that distribution, which would in turn yield a more accurate distribution. So basically the claim is that we can always simply divide up any observable into the part we can predict (i.e., a distribution of outcomes) and the part we can't. Once you mod out by the part you can predict, by definition anything left is totally unpredictable to you (e.g., to computable machines) and thus can't detectably fail to look random according to its distribution, since that would itself be a better prediction.
This isn't rigorous (it's complicated), but the point is that randomness is nothing but our inability to make any better predictions.
comment by Roko · 2009-12-11T07:16:44.917Z · LW(p) · GW(p)
All possible worlds are real, and probabilities represent how much I care about each world.
Right, so maybe we need to rethink this whole rationality thing, then? I mean, since there are possible worlds where god exists, under this view, the only difference between a creationist and a rational atheist is one of taste?
To me, the god world seems much easier to deal with and more pleasant. So why not shun rationality all together if probabilities are actually arbitrary - if thinking it really does make it so?
Replies from: Wei_Dai, bigbad, jimmy↑ comment by Wei Dai (Wei_Dai) · 2009-12-11T09:29:36.720Z · LW(p) · GW(p)
In this view, rationality doesn't play a role in choosing the initial weights on the possible universes. That job would be handed over to moral philosophy, just like choosing the right utility function already is.
So why not shun rationality all together if probabilities are actually arbitrary - if thinking it really does make it so?
No, thinking it doesn't make it so. Even in this view, the right beliefs and decisions aren't arbitrary, because they depend in a lawful way on your preferences. You still want to be rational in order to make the best decisions to satisfy your preferences.
Replies from: Roko, Jayson_Virissimo↑ comment by Roko · 2009-12-12T02:40:09.556Z · LW(p) · GW(p)
Even in this view, the right beliefs and decisions aren't arbitrary, because they depend in a lawful way on your preferences.
Right, but I don't actually have a strong preference for the simplicity prior that science uses: if I can just choose what kind of reality to endorse - and there is really no fact of the matter about which one is real - it seems silly to endorse the reality based on the occam prior of science. According to science - i.e. according to the probability distribution you get from updating the complexity/occam prior with the evidence - the world is allowed to do lots of horrible things to me, like kill me.
It would be much more pleasant to endorse some other prior - for example, one where everything just happens to work out to match my preferences - the "wishful thinking" prior.
In general, if there is no fact of the matter about what is real, then why would anyone bother to endorse anything other than their own personal wishful thinking as real? It would seem to be irrational not to.
Replies from: Wei_Dai, steven0461↑ comment by Wei Dai (Wei_Dai) · 2009-12-12T04:07:31.275Z · LW(p) · GW(p)
It would be much more pleasant to endorse some other prior - for example, one where everything just happens to work out to match my preferences - the "wishful thinking" prior.
Presumably you don't do that because that's not your actual prior - you don't just care about one particular possible world where things happen to turn out exactly the way you want. You also care about other possible worlds and want to make decisions in ways that make those worlds better.
In general, if there is no fact of the matter about what is real, then why would anyone bother to endorse anything other than their own personal wishful thinking as real?
It would be for the same reason that you don't change your utility function to give everything an infinite utility.
Replies from: Roko↑ comment by Roko · 2009-12-12T06:14:52.401Z · LW(p) · GW(p)
you don't just care about one particular possible world where things happen to turn out exactly the way you want.
Presumably there are infinitely many possible worlds where things happen to turn out exactly the way I want: I care about some small finite subset of the world, and the rest is allowed to vary. Why should I expend energy worrying about one particular infinity of worlds that are hard to optimize when I have already got infinitely many where I win easily or by default?
There are presumably also infinitely many possible worlds where all varieties of bizarre decision/action algorithms are the way to win. For example, the world where the extent to which your preferences get satisfied is determined by what fraction of your skin is covered in red body paint, etc, etc.
Also, there are other classes of worlds where I lose: for example, anti-inductive worlds. Why should I pay special attention to the worlds that loosely obey the occam/complexity prior?
Perhaps I could frame it this way: the complexity prior is (in fact) counterintuitive and alien to the human mind. Why should I pay special attention to worlds that conform to it (simple worlds)?
The answer I used to have was "because it works", which seemed to cache out as
"if I use a complexity prior to repeatedly make decisions, then my subjective experience will be (mostly) of winning"
which I used to think was because the Real world that we live in is, in fact, a simple one, rather than a wishful-thinking one, a red-body-paint one, or an anti-inductive one.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2009-12-12T12:12:52.677Z · LW(p) · GW(p)
It sounds like you're assuming that people use a wishful-thinking prior by default, and have to be argued into a complexity-based prior. This seems implausible to me.
I think the phenomenon of wishful thinking doesn't come from one's prior, but from evolution being too stupid to design a rational decision process. That is, a part of my brain rewards me for increasing the anticipation of positive future experiences, even if that increase is caused by faulty reasoning instead of good decisions. This causes me to engage in wishful thinking (i.e., miscalculating the implications of my prior) in order to increase my reward.
Perhaps I could frame it this way: the complexity prior is (in fact) counterintuitive and alien to the human mind.
I dispute this. Sure, some of the implications of the complexity prior are counterintuitive, but it would be surprising if none of them were. I mean, some theorems of number theory are counterintuitive, but that doesn't mean integers are aliens to the human mind.
Why should I pay special attention to worlds that conform to it (simple worlds)?
Suppose someone gave you a water-tight argument that all possible worlds are in fact real, and you have to make decisions based on which worlds you care more about. Would you really adopt the "wishful-thinking" prior and start putting all your money into lottery tickets or something similar, or would your behavior be more or less unaffected? If it's the latter, don't you already care more about worlds that are simple?
"if I use a complexity prior to repeatedly make decisions, then my subjective experience will be (mostly) of winning"
Perhaps this is just one of the ways an algorithm that cares about each world in proportion to its inverse complexity could feel from the inside?
Replies from: Roko, Roko↑ comment by Roko · 2009-12-13T01:52:32.489Z · LW(p) · GW(p)
"if I use a complexity prior to repeatedly make decisions, then my subjective experience will be (mostly) of winning" - Perhaps this is just one of the ways an algorithm that cares about each world in proportion to its inverse complexity could feel from the inside?
this is a good point, I'll have to think about it.
↑ comment by Roko · 2009-12-12T20:44:20.975Z · LW(p) · GW(p)
Suppose someone gave you a water-tight argument that all possible worlds are in fact real, and you have to make decisions based on which worlds you care more about. Would you really adopt the "wishful-thinking" prior and start putting all your money into lottery tickets or something similar, or would your behavior be more or less unaffected?
I think that there would be a question about what "I" would actually experience.
There have been times in my younger days when I tried a bit of wishful thinking - I think everyone has. Maybe, just maybe, if I wish hard enough for X, X will happen? Well what you actually experience after doing that is ... failure. Wishing for something doesn't make it happen - or if it does in some worlds, then I have evidence that I don't inhabit those worlds.
So I suppose I am using my memory - which points to me having always been in a world that behaves exactly as the complexity prior would predict - as evidence that the thread of my subjective experience will always be in a world that behaves as the complexity prior would predict, which is sort of like saying that only one particular simple world is real.
Replies from: timtyler↑ comment by timtyler · 2009-12-12T20:50:09.243Z · LW(p) · GW(p)
You don't believe in affirmations? The self-help books about the power of positive thinking don't work for you? What do you make of the following quote?
"Personal optimism correlates strongly with self-esteem, with psychological well-being and with physical and mental health. Optimism has been shown to be correlated with better immune systems in healthy people who have been subjected to stress."
Replies from: Roko↑ comment by Roko · 2009-12-12T21:04:41.013Z · LW(p) · GW(p)
This is not the kind of wishful thinking I was talking about: I was talking about wishing for $1000 and it just appearing in your bank account.
Replies from: timtyler↑ comment by timtyler · 2009-12-12T21:21:28.339Z · LW(p) · GW(p)
When crafting one's wishes, one should have at least some minor element of realism.
Also, your wish should be something your subconscious can help you with. For example, instead of wishfully thinking about money appearing in your bank account, you could wishfully think about finding it on the sidewalk. Or, alternatively, you could wishfully think of yourself as a money magnet.
If you previously did not bear such points in mind, you might want to consider revisiting the technique, to see if you can make something of it. Unless you figure you are already too optimistic, that is.
↑ comment by steven0461 · 2009-12-12T05:08:26.961Z · LW(p) · GW(p)
↑ comment by Jayson_Virissimo · 2009-12-11T23:56:31.825Z · LW(p) · GW(p)
Isn't that conflating instrumental rationality and epistemic rationality?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-12-12T00:28:34.640Z · LW(p) · GW(p)
Epistemic rationality can be seen as a kind of instrumental rationality. See Scoring rule, Epistemic vs. Instrumental Rationality: Approximations.
↑ comment by bigbad · 2009-12-12T17:54:49.864Z · LW(p) · GW(p)
You seem to be confusing plausibility with possibility. The existence of God seems plausible to many people, but whether or not the existence of God is truly possible is not clear. Reasonable people believe that God is impossible, others that God is possible, and others that God is necessary (i.e. God's nonexistence is impossible).
Replies from: Roko↑ comment by jimmy · 2009-12-12T19:14:34.231Z · LW(p) · GW(p)
It wouldn't quite throw all of our shit in the fan. If you know you're living in a QM many-worlds universe, you still have to optimize the Born probabilities, for example.
I think we can rule out the popular religions as being impossible worlds, but simulated worlds are possible worlds, and in some subset of them, you can know this.
In the ones where you can know this and differentiate to some degree, there are certainly actions that one could take to help his 'simulated' selves at the cost of the 'nonsimulated' selves, if you cared.
I guess the question is whether it's even consistent to care about being "simulated" or not, and where you draw the line (what if you have some information rate in from the outside and have some influence over it? What if it's the exact same hardware, just plugged in like in 'The Matrix'?)
My guess is that it is gonna turn out to not make any sense to care about them differently, and that there's some natural weighting which we haven't yet figured out. Maybe weight each copy by the redundancy in the processor (e.g. if each transistor is X atoms big, then that can be thought of as X copies living in the same house), or by the power they have to influence the world, or something. Both of those have problems, but I can't think of anything better.
Replies from: Roko↑ comment by Roko · 2009-12-13T01:51:12.276Z · LW(p) · GW(p)
I think we can rule out the popular religions as being impossible worlds
There are possible worlds that are pretty good approximations to popular religions.
If you know you're living in a QM many-worlds universe, you still have to optimize the Born probabilities
I don't understand this...
Replies from: jimmy↑ comment by jimmy · 2009-12-16T05:08:32.778Z · LW(p) · GW(p)
There are possible worlds that are pretty good approximations to popular religions.
True...
I don't understand this...
The paper does a much more thorough job than I, but the summary is that the only consistent way to carve things up is into Born probabilities, so you have to weight branches accordingly. I think this has to do with the amplitude squared being conserved, so that the Ebborian equivalent would be their thickness, but I admit some confusion here.
This means there's at least some sense of probability that you don't get to 'wish away', though it's still possible to only care about worlds where "X" is true (though in general you actually do care about the other worlds).
Replies from: Roko↑ comment by Roko · 2009-12-16T08:49:17.106Z · LW(p) · GW(p)
There are plenty of possible worlds (infinitely many of them) where quantum mechanics is false; so I don't see how this helps.
Replies from: jimmy↑ comment by jimmy · 2009-12-16T18:58:09.178Z · LW(p) · GW(p)
It means that if you are in one, probability does not come down to only preferences. I suppose that since you can never be absolutely sure you're in one, you still have to find out your weightings between worlds where there might be nothing but preferences.
The other point is that I seriously doubt there's anything built into you that makes you not care about possible worlds where QM is true, so even if it does come down to 'mere preferences', you can still make mistakes.
The existence of an objective weighting scheme within one set of possible worlds gives me some hope of an objective weighting between all possible worlds, but not all that much, and it's not clear to me what that would be. Maybe the set of all possible worlds is countable, and each world is weighted equally?
Replies from: Roko↑ comment by Roko · 2009-12-17T08:20:04.772Z · LW(p) · GW(p)
Maybe the set of all possible worlds is countable, and each world is weighted equally?
I am not really sure what to make of weightings on possible worlds. Overall, on this issue, I think I am going to have to admit that I am thoroughly confused.
By the way, do you mean "finite" here, rather than countable?
Replies from: jimmy↑ comment by jimmy · 2009-12-17T22:59:03.326Z · LW(p) · GW(p)
Yeah, but the confusion gets better as the worlds become more similar. How to weight between QM worlds and nonQM worlds is something I haven't even seen an attempt to explain, but how to weight within QM worlds has been explained, and how to weight in the sleeping beauty problem is quite straight forward.
I meant countable, but now that you mention it, I think I should have said finite. I'll have to think about this some more.
comment by Scott Alexander (Yvain) · 2009-12-11T14:31:34.719Z · LW(p) · GW(p)
Before I’ve observed anything, there seems to be no reason to believe that I’m more likely to be in one world than another, but we can’t let all their weights be equal.
We can't? Why not? Estimating the probability of two heads on two coinflips as 25% is giving existence in worlds with heads-heads, heads-tails, tails-heads, and tails-tails equal weight. The same is true of a more complicated proposition like "There is a low probability that Bigfoot exists" - giving every possible arrangement of objects/atoms/information equal weight, and then ruling out the ones that don't result in the evidence we've observed, few of these worlds contain Bigfoot.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-12-11T16:13:13.336Z · LW(p) · GW(p)
giving every possible arrangement of objects/atoms/information equal weight
Without an arbitrary upper bound on complexity, there are infinitely many possible arrangements.
Replies from: Yvain, sharpneli↑ comment by Scott Alexander (Yvain) · 2009-12-12T17:37:56.646Z · LW(p) · GW(p)
Theoretically, it's not infinite because of the granularity of time/space, speed of light, and so on.
Practically, we can get around this because we only care about a tiny fraction of the possible variation in arrangements of the universe. In a coin flip, we only care about whether a coin is heads-up or tails-up, not the energy state of every subatomic particle in the coin.
This matters in the case of a biased coin - let's say biased towards heads 66%. This, I think, is what Wei meant when he said we couldn't just give equal weights to all possible universes - the ones where the coin lands on heads and the ones where it lands on tails. But I think "universes where the coin lands on heads" and "universes where the coin lands on tails" are unnatural categories.
Consider how the probability of winning the lottery isn't .5 because we choose with equal weight between the two alternatives "I win" and "I don't win". Those are unnatural categories, and instead we need to choose with equal weight between "I win", "John Q. Smith of Little Rock, Arkansas wins", "Mary Brown of San Antonio, Texas wins", and so on, for millions of other people. The unnatural category "I don't win" contains millions of more natural categories.
So on the biased coin flip, the categories "the coin lands heads" and "the coin lands tails" contain a bunch of categories of lower-level events about collisions of air molecules and coin molecules and amounts of force one can use to flip a coin, and two-thirds of those events are in the "coin lands heads" category. But among those lower-level events, you choose with equal weight.
True, beneath these lower-level categories about collisions of air molecules, there are probably even lower things like vibrations of superstrings or bits in the world-simulation or whatever the lowest level of reality is, but as long as these behave mathematically I don't see why they prevent us from basing a theory of probability on the effects of low level conditions.
Replies from: Wei_Dai, Nick_Tarleton↑ comment by Wei Dai (Wei_Dai) · 2009-12-13T11:17:59.352Z · LW(p) · GW(p)
Theoretically, it's not infinite because of the granularity of time/space, speed of light, and so on.
These initial weights are supposed to be assigned before taking into account anything you have observed. But even now (under the second interpretation in my list) you can't be sure that the world you're in is finite. So, suppose there is one possible world for each integer in the set of all integers, or one possible world for each set in the class of all sets. How could one assign equal weight to all possible worlds, and have the weights add up to 1?
Practically, we can get around this because we only care about a tiny fraction of the possible variation in arrangements of the universe. In a coin flip, we only care about whether a coin is heads-up or tails-up, not the energy state of every subatomic particle in the coin.
I don't think that gets around the problem, because there is an infinite number of possible worlds where the energy state of nearly every subatomic particle encodes some valuable information.
Replies from: sharpneli↑ comment by sharpneli · 2009-12-13T17:37:44.532Z · LW(p) · GW(p)
How could one assign equal weight to all possible worlds, and have the weights add up to 1?
By the same method we do calculus. Instead of a sum over the possible worlds, we integrate over the possible worlds (an infinite sum of infinitesimally small values). For an explicit construction of how this is done, any basic calculus book is enough.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2009-12-13T22:56:07.058Z · LW(p) · GW(p)
My understanding is that it's possible to have a uniform distribution over a finite set, or an interval of the reals, but not over all integers, or all reals, which is why I said in the sentence before the one you quoted, "suppose there is one possible world for each integer in the set of all integers."
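For reference, the standard obstruction in the integer case (assuming countable additivity):

```latex
\[
\text{If } P(\{n\}) = c \text{ for every integer } n, \text{ then }
P(\mathbb{Z}) = \sum_{n \in \mathbb{Z}} c =
\begin{cases}
  0      & \text{if } c = 0,\\
  \infty & \text{if } c > 0,
\end{cases}
\]
```

so the total can never equal 1. A uniform density on an interval escapes this only because uncountably many points can each carry probability zero while sub-intervals still carry positive mass.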
Replies from: pengvado, sharpneli↑ comment by pengvado · 2009-12-14T04:38:02.791Z · LW(p) · GW(p)
There is a 1:1 mapping between "the set of reals in [0,1]" and "the set of all reals". So take your uniform distribution on [0,1] and put it through such a mapping... and the result is non-uniform. Which pretty much kills the idea of "uniform <=> each element has the same probability as each other".
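A numerical sketch of that point (the particular bijection x -> tan(pi*(x - 1/2)) from (0, 1) onto the reals is an arbitrary choice made here for illustration, not anything specified in the comment):

```python
import math
import random

# Push a uniform sample on (0, 1) through x -> tan(pi*(x - 1/2)), which maps
# the interval onto all of R. The image is anything but uniform: the
# pushforward is the Cauchy density, with most of its mass near 0.
random.seed(0)
xs = [random.random() for _ in range(100_000)]
ys = [math.tan(math.pi * (x - 0.5)) for x in xs]

near_zero = sum(abs(y) < 1 for y in ys) / len(ys)
far_out = sum(abs(y) > 100 for y in ys) / len(ys)
print(f"fraction with |y| < 1:   {near_zero:.3f}")   # about 0.5
print(f"fraction with |y| > 100: {far_out:.4f}")     # about 0.006
```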
There is no such thing as a continuous distribution on a set alone, it has to be on a metric space. Even if you make a metric space out of the set of all possible universes, that doesn't give you a universal prior, because you have to choose what metric it should be uniform with respect to.
(Can you have a uniform "continuous" distribution without a continuum? The rationals in [0,1]?)
↑ comment by sharpneli · 2009-12-14T07:37:56.073Z · LW(p) · GW(p)
As there is a 1:1 mapping between the set of all reals and the unit interval, we can just use the unit interval and define a uniform distribution there. Whatever distribution you choose, we can map it into the unit interval, as pengvado said.
In the case of the set of all integers I'm not completely certain. But I'd look at the set of computable reals, which we can use for much of mathematics. Normal calculus can be done with just the computable reals (the set of all numbers for which there is an algorithm that provides an arbitrary decimal digit in finite time). So basically we have a mapping from the computable reals in the unit interval into the set of all integers.
Another question is whether the uniform distribution is the entropy-maximising distribution when we consider the set of all integers.
From a physical standpoint, why are you interested in countably infinite probability distributions? If we assume discrete physical laws, we'd have a finite number of possible worlds; on the other hand, if we assume continuous laws, we'd have an uncountably infinite number, which can be mapped into the unit interval.
Off the top of my head I can imagine a set of discrete worlds of all sizes, which would be countably infinite. What other kinds of worlds could there be where this would be relevant?
↑ comment by Nick_Tarleton · 2009-12-13T04:46:01.281Z · LW(p) · GW(p)
Theoretically, it's not infinite because of the granularity of time/space, speed of light, and so on.
(Nitpick: Spacetime isn't quantized AFAIK in standard physics, and then there are still continuous quantum amplitudes.)
This, I think, is what Wei meant when he said we couldn't just give equal weights to all possible universes - the ones where the coin lands on heads and the ones where it lands on tails. But I think "universes where the coin lands on heads" and "universes where the coin lands on tails" are unnatural categories.
I thought Wei was talking about single worlds (whatever those may be), not sets of worlds. Applied to sets of worlds, this seems correct.
↑ comment by sharpneli · 2009-12-13T00:31:54.161Z · LW(p) · GW(p)
Yvain said the finiteness well, but I think the "infinitely many possible arrangements" needs a little elaboration.
In any continuous probability distribution we have infinitely many (actually uncountably infinitely many) possibilities, and this makes the probability of any single outcome 0. This is the reason why, in the case of continuous distributions, we talk about the probability of the outcome falling in a certain interval (a collection of infinitely many arrangements).
So instead of counting the individual arrangements we calculate integrals over some set of arrangements. Infinitely many arrangements is no hindrance to applying probability theory. Actually if we can assume continuous distribution it makes some things much easier.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-12-13T04:46:43.608Z · LW(p) · GW(p)
Good point. Does this work over all infinite sets, though? Integers? Rationals?
Replies from: sharpneli↑ comment by sharpneli · 2009-12-13T10:40:09.457Z · LW(p) · GW(p)
It does work. Actually, if we're using integers (there are as many integers as rationals, so we don't need to care about the latter set), we get the good old discrete probability distribution, where we have either a finite number of possibilities or at most a countable infinity of possibilities, e.g. the set of all integers.
The real numbers are a strictly larger set than the integers, so in a continuous distribution we have, in a sense, more possibilities than in a countably infinite discrete distribution.
comment by ricksterh4 · 2009-12-11T12:35:25.546Z · LW(p) · GW(p)
Hmmm - caring as a part of reality? Why not just flip things up, and consider that emotion is also part of reality. Random by any other name. Try to exclude it and you'll find you can't no matter how infinitely many worlds you suppose. There's also calculus to irrationality . . .
Replies from: pengvado↑ comment by pengvado · 2009-12-11T13:02:16.592Z · LW(p) · GW(p)
The "caring" interpretation doesn't say that caring is part of reality (except insofar as minds are implemented in reality). Rather, it says that probability isn't part of reality, it's part of decision theory (again except insofar as minds are implemented in reality).
Replies from: ricksterh4↑ comment by ricksterh4 · 2009-12-11T16:05:57.211Z · LW(p) · GW(p)
cool! but can you really posit artificial intelligence (decision theory has to get enacted somewhere) and not allow mind as part of reality?
comment by ESRogs · 2015-08-18T18:28:49.880Z · LW(p) · GW(p)
All possible worlds are real, and probabilities represent how much I care about each world. ... Which worlds I care more or less about seems arbitrary.
This view seems appealing to me, because 1) deciding that all possible worlds are real seems to follow from the Copernican principle, and 2) if all worlds are real from the perspective of their observers, as you said it seems arbitrary to say which worlds are more real.
But on this view, what do I do with the observed frequencies of past events? Whenever I've flipped a coin, heads has come up about half the time. If I accept option 4, am I giving up on the idea that these regularities mean anything?
comment by A1987dM (army1987) · 2011-11-26T18:11:13.177Z · LW(p) · GW(p)
What does real even mean, by the way? Interpretation 1 with real taken to mean ‘of or pertaining to the world I'm in’ (as I would) is equivalent to Interpretation 2 with real taken to mean ‘possible’ (as Tegmark would, IIUC) and to Interpretation 3 with real taken to mean ‘likely’ and to Interpretation 4 with real taken to mean ‘important to me’.
comment by bigbad · 2009-12-12T17:46:01.530Z · LW(p) · GW(p)
It depends. We use the term "probability" to cover a variety of different things, which can be handled by similar mathematics but are not the same.
For example, suppose that I'm playing blackjack. Given a certain disposition of cards, I can calculate a probability that asking for the next card will bust me. In this case the state of the world is fixed, and probability measures my ignorance. The fact that I don't know which card would be dealt to me doesn't change the fact that there's a specific card on the top of the deck waiting to be dealt. If I knew more about the situation (perhaps by counting cards) I might have a better idea of which cards could possibly be on top of the deck, but the same card would still be on top of the deck. In this situation, case 1 applies from the choices above.
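A toy version of that calculation (the function below, the single-deck assumption, and counting aces as 1 are all simplifications added here for illustration of "probability as a measure of my ignorance about the fixed top card"):

```python
from collections import Counter

def bust_probability(my_hand, visible_cards, decks=1):
    # Build the deck by rank value: 2-10 at face value, J/Q/K as 10, aces as 1.
    full = Counter()
    for value in list(range(2, 11)) + [10, 10, 10, 1]:
        full[value] += 4 * decks
    # Remove the cards I can see; the rest is exactly what I am ignorant about.
    for card in my_hand + visible_cards:
        full[card] -= 1
    remaining = sum(full.values())
    busting = sum(n for value, n in full.items() if sum(my_hand) + value > 21)
    return busting / remaining

# Holding 10 + 6 with one dealer 10 showing:
print(bust_probability([10, 6], [10]))  # about 0.59
```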
Alternately, consider photons going through a double slit in the classic quantum physics experiment. If the holes are of equal size and geometry, a photon has a 50% chance of passing through each slit (the probabilities can be adjusted, for example by changing the width of one slit). One of the basic results of quantum physics is that the profile of the light through both slits is not the same as the sum of the profiles of the light through each slit. In general, it is not possible to say which slit a given photon went through, and attempting to make that measurement changes the answer. In this situation, case 3 of the above post seems to apply.
My point is that the post's question can't be answered for probabilities in general. It depends.
comment by prase · 2009-12-11T17:22:42.975Z · LW(p) · GW(p)
The post would be much better if a definition of "possible world" were given. And when giving definitions, perhaps defining precisely what "real" means would also be beneficial.
More or less, I interpret "reality" as all things which can be observed. "Possible", in my language, is something which I can imagine and which doesn't contradict facts that I already know. This is a somewhat subjective definition, but possibility obviously depends on subjective knowledge. I have flipped a coin. Before I looked at the result, it was possible that it came up heads. After I have looked at it, it's clear that it came up tails; heads is impossible.
Needless to say, people rarely imagine whole worlds. Rather, they use the word "possible" when speculating about unknown parts of this world. Which may be confusing, since our intuitive understanding of the word doesn't match its use.
Even if defined somehow objectively (as e.g. possible world is any world isomorphic to a formal system with properties X), it seems almost obvious that real world(s) and possible worlds are different categories. If not, there is no need to have distinct names for them.
So before creating theories about what probability means, I suggest we unite the language. These things have been discussed here already several times, but I don't think there is a consensus in interpretation of "possible", "real", "world", "arbitrary". And, after all, I am not sure whether "probability" even should be interpreted using these terms. It almost feels like "probability" is a more fundamental term than "possible" or "arbitrary".
I must admit that I am biased against "possible worlds" and similar phrases, because they tend to appear mostly in theological and philosophical discussions, whose rather empty conclusions are dissatisfactory. I am afraid of lack of guidelines strong enough to keep thinking in limits of rationality.
comment by Tyrrell_McAllister · 2009-12-11T03:03:29.414Z · LW(p) · GW(p)
All possible worlds are real, and probabilities represent how much I care about each world.
Could you elaborate on what it means to have a given amount of "care" about a world? For example, suppose that I assign (or ought to assign) probability 0.5 to a coin's coming up heads. How do you translate this probability assignment into language involving amounts of care for worlds?
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2009-12-11T03:09:58.233Z · LW(p) · GW(p)
You care equally for your selves that see heads and your selves that see tails. If you don't care what happens to you after you see heads, then you would assign probability one to tails. Of course, you'd be wrong in about half the worlds, but hey, no skin off your nose. You're the one who sees tails. Those other guys ... they don't matter.
Replies from: timtyler, Tyrrell_McAllister↑ comment by timtyler · 2009-12-12T09:10:56.473Z · LW(p) · GW(p)
A bizarre interpretation.
For example, caring about "living until tomorrow" does not normally mean assigning a zero probability to death in the interim. If anything that would tend to make you fearless - indifferent to whether you stepped in front of a bus or not - the very opposite of what we normally mean by "caring" about some outcome.
↑ comment by Tyrrell_McAllister · 2009-12-11T04:46:58.587Z · LW(p) · GW(p)
Thanks. That makes it a lot clearer.
It seems like this "caring" could be analyzed a lot more, though. For example, suppose I were an altruist who continued to care about the "heads" worlds even after I learned that I'm not in them. Wouldn't I still assign probability ~1 to the proposition that the coin came up tails in my own world? What does that probability assignment of ~1 mean in that case?
I suppose the idea is that a probability captures not only how much I care about a world, but also how much I think that I can influence that world by acting on my values.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2009-12-11T23:23:47.705Z · LW(p) · GW(p)
See http://lesswrong.com/lw/15m/towards_a_new_decision_theory/ for more details. Many of my later posts can be considered explanations/justifications for the "design choices" I made in that post.
comment by DanArmak · 2009-12-11T12:05:15.486Z · LW(p) · GW(p)
Why should probabilities mean anything? How would you behave differently if you decided (or learned) that a given interpretation was correct?
As long as there's no difference, and your actions add up to normality under any of the interpretations, then I don't see why an interpretation is needed at all.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2009-12-11T20:27:43.537Z · LW(p) · GW(p)
The different interpretations suggest different approaches to answer the question of "what is the right prior?" and also different approaches to decision theory. I mentioned that the "caring" interpretation fits well with UDT.
Replies from: DanArmak↑ comment by DanArmak · 2009-12-11T21:47:18.648Z · LW(p) · GW(p)
Can't you choose your (arational) preferences to get any behaviour (decision theory) no matter what interpretation you choose?
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2009-12-11T22:01:32.133Z · LW(p) · GW(p)
Preferences may be arational, but they're not completely arbitrary. In moral philosophy there are still arguments for what one's preferences should be, even if they are generally much weaker than the arguments in rationality. Different interpretations influence what kinds of arguments apply or make sense to you, and therefore influence your preferences.
Replies from: DanArmak↑ comment by DanArmak · 2009-12-11T22:12:15.093Z · LW(p) · GW(p)
How can there be arguments about what preferences should be? Aren't they, well, a sort of unmoved mover, a primal cause? (To use some erstwhile philosophical terms :-)
I can understand meta-arguments that say your preferences should be consistent in some sense, or that argue about subgoal preferences given some supergoals. But even under strict constraints of that kind, you have a lot of latitude, from humans to paperclip maximizers on out. Within that range, does interpreting probabilities differently really give you extra power you can't get by finetuning your prefs?
Edit: the reason I'd prefer editing prefs is that talking about the Meaning of Probabilities sets off my materialism sensors. It leads to things like multiple-world theories because they're easy to think about as an interpretation of QM, regardless of whether they actually exist. Then they can actually negatively affect our prefs or behavior.
Replies from: Wei_Dai, timtyler↑ comment by Wei Dai (Wei_Dai) · 2009-12-11T22:24:31.991Z · LW(p) · GW(p)
How can there be arguments about what preferences should be?
Well, I don't know what many of my preferences should be. How can I find out except by looking for and listening to arguments?
Aren't they, well, a sort of unmoved mover, a primal cause? (To use some erstwhile philosophical terms :-)
No, not for humans anyway.
Replies from: DanArmak↑ comment by DanArmak · 2009-12-11T22:37:43.048Z · LW(p) · GW(p)
Well, I don't know what many of my preferences should be. How can I find out except by looking for and listening to arguments?
That implies there's some objectively-definable standard for preferences which you'll be able to recognize once you see it. Also, it begs the question of what in your current preferences says "I have to go out and get some more/different preferences!" From a goal-driven intelligence's POV, asking others to modify your prefs in unspecified ways is pretty much the anti-rational act.
Replies from: Wei_Dai, Vladimir_Nesov↑ comment by Wei Dai (Wei_Dai) · 2009-12-12T00:23:39.343Z · LW(p) · GW(p)
I think we need to distinguish between what a rational agent should do, and what a non-rational human should do to become more rational. Nesov's reply to you also concerns the former, I think, but I'm more interested in the latter here.
Unlike a rational agent, we don't have well-defined preferences, and the preferences that we think we have can be changed by arguments. What to do about this situation? Should we stop thinking up or listening to arguments, and just fill in the fuzzy parts of our preferences with randomness or indifference, in order to emulate a rational agent in the most direct manner possible? That doesn't make much sense to me.
I'm not sure what we should do exactly, but whatever it is, it seems like arguments must make up a large part of it.
Replies from: Vladimir_Nesov, DanArmak, timtyler↑ comment by Vladimir_Nesov · 2009-12-12T00:36:24.219Z · LW(p) · GW(p)
That arguments modify preference means that you are (denotationally) arriving at different preferences depending on the arguments. This means that, from the perspective of a specific given preference (or the "true" neutral preference not biased by specific arguments), you fail to obtain the optimal rational decision algorithm, and thus to achieve a high-preference strategy. But at the same time, "absence of action" is also an action, so not exploring the arguments may well be a worse choice, since you won't be moving forward towards a clearer understanding of your own preference, even if the preference that you end up understanding will be somewhat biased compared to the unknown original one.
Thus, there is a tradeoff:
- Irrational perception of arguments leads to modification of preference, which is bad for original preference, but
- Considering moral arguments leads to a clearer understanding of some preference close to the original one, which allows one to make more rational decisions, which is good for the original preference.
↑ comment by DanArmak · 2009-12-12T00:43:46.452Z · LW(p) · GW(p)
Please see my reply to Nesov above, too.
I think we shouldn't try to emulate rational agents at all, in the sense that we shouldn't pretend to have rationality-style preferences and supergoals; as a matter of fact we don't have them.
Up to here we seem to agree, we just use different terminology. I just don't want to conflate rational preferences with human preferences, because the two systems behave very differently.
Just as an example, in signalling theories of behaviour, you may consciously believe that your preferences are very different from what your behaviour is actually optimizing for when no one is looking. A rational agent wouldn't normally have separate conscious/unconscious minds unless only the conscious part was subject to outside inspection. In this example, it makes sense to update signalling-preferences sometimes, because they're not your actual acting-preferences.
But if you consciously intend to act out your (conscious) preferences, and also intend to keep changing them in not-always-foreseeable ways, then that isn't rationality, and when there could be confusion due to context (such as on LW most of the time) I'd prefer not to use the term "preferences" about humans, or to make clear what is meant.
↑ comment by Vladimir_Nesov · 2009-12-11T23:13:09.517Z · LW(p) · GW(p)
As an example, consider the arguments in form of proofs/disproofs of the statements that you are interested in. Information doesn't necessarily "change" or "determine arbitrarily" the things you take from it, it may help you to compute an object in which you are already interested, without changing that object, and at the same time be essential in moving forward. If you have an algorithm, it doesn't mean that you know what this algorithm will give you in the end, what the algorithm "means". Resist the illusion of transparency.
Replies from: DanArmak↑ comment by DanArmak · 2009-12-12T00:13:00.987Z · LW(p) · GW(p)
I don't understand what you're saying as applied to this argument. That Wei Dai has an algorithm for modifying his preferences and he doesn't know what the end output of that algorithm will be?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-12-12T00:21:09.955Z · LW(p) · GW(p)
There will always be something about preference that you don't know, and it's not the question of modifying preference, it's a question of figuring out what the fixed unmodifiable preference implies. Modifying preference is exactly the wrong way of going about this.
If we figure out the conceptual issues of FAI, we'd basically have the algorithm that is our preferences, but not in infinite and unknowable normal "execution trace" denotational "form".
Replies from: DanArmak↑ comment by DanArmak · 2009-12-12T00:35:48.805Z · LW(p) · GW(p)
As Wei says below, we should consider rational agents (who have explicit preferences separate from the rest of their cognitive architecture) separately from humans who want to approximate that in some ways.
I think that if we first define separate preferences, and then proceed to modify them over and over again, this is so different from rational agents that we shouldn't call it preferences at all. We can talk about e.g. morals instead, or about habits, or biases.
On the other hand if we define human preferences as 'whatever human behavior happens to optimize', then there's nothing interesting about changing our preferences, this is something that happens all the time whether we want it to or not. Under this definition Wei's statement that he deliberately makes it happen is unclear (the totality of a human's behaviour, knowledge, etc. is subtly changing over time in any case) so I assumed he was using the former definition.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-12-12T00:46:12.996Z · LW(p) · GW(p)
There is no clear-cut dichotomy between defining something completely at the beginning and doing things arbitrarily as we go. Instead of defining preference for rational agents, in a complete, finished form, and then seeing what happens, consider a process of figuring out what preference is. This is neither a way to arrive at the final answer, at any point, nor a history of observing of "whatever happens". Rational agent is an impossible construct, but something irrational agents aspire to be, never obtaining. What they want to become isn't directly related to what they "appear" to strive towards.
Replies from: DanArmak↑ comment by DanArmak · 2009-12-12T00:56:33.408Z · LW(p) · GW(p)
I understand. So you're saying we should indeed use the term 'preference' for humans (and a lot of other agents) because no really rational agents can exist.
Actually, why is this true? I don't know about perfect rationality, but why shouldn't an agent exist whose preferences are completely specified and unchanging?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-12-12T01:10:58.027Z · LW(p) · GW(p)
I understand. So you're saying we should indeed use the term 'preference' for humans (and a lot of other agents) because no really rational agents can exist.
Right. Except that really rational agents might exist, but not if their preferences are powerful enough, as humans' have every chance to be. And whatever we irrational humans, or our godlike but still, strictly speaking, irrational FAI try to do, the concept of "preference" still needs to be there.
Actually, why is this true? I don't know about perfect rationality, but why shouldn't an agent exist whose preferences are completely specified and unchanging?
Again, it's not about changing preference. See these comments.
An agent can have a completely specified and unchanging preference, but still not know everything about it (and never able to know everything about it). In particular, this is a consequence of halting problem: if you have source code of a program, this code completely specifies whether this program halts, and you may run this code for arbitrarily long time without ever changing it, but still not know whether it halts, and not being able to ever figure that out, unless you are lucky to arrive at a solution in this particular case.
Replies from: DanArmak↑ comment by DanArmak · 2009-12-12T01:24:24.397Z · LW(p) · GW(p)
OK, I understand now what you're saying. I think the main difference, then, between preferences in humans and in perfect (theoretical) agents is that our preferences aren't separate from the rest of our mind.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-12-12T01:27:25.047Z · LW(p) · GW(p)
I think the main difference, then, between preferences in humans and in perfect (theoretical) agents is that our preferences aren't separate from the rest of our mind.
I don't understand this point.
Replies from: DanArmak↑ comment by DanArmak · 2009-12-12T01:42:18.930Z · LW(p) · GW(p)
Rational (designed) agents can have an architecture with preferences (decision making parts) separate from other pieces of their minds (memory, calculations, planning, etc.) Then it's easy (well, easier) to reason about changing their preferences because we can hold the other parts constant. We can ask things like "given what this agent knows, how would it behave under preference system X"?
The agent may also be able to simulate proposed modifications to its preferences without having to simulate its entire mind (which would be expensive). And, indeed, a sufficiently simple preference system may be chosen so that it is not subject to the halting problem and can be reasoned about.
In humans though, preferences and every other part of our minds influence one another. While I'm holding a philosophical discussion about morality and deciding how to update my so-called preferences, my decisions happen to be affected by hunger or tiredness or remembering having had good sex last night. There are lots of biases that are not perceived directly. We can't make rational decisions easily.
In rational agents that self-modify their preferences, the new prefs are determined by the old prefs, i.e. via second-order prefs. But in humans prefs are potentially determined by the entire state of mind, so perhaps we should talk about "modifying our minds" and not our prefs, since it's hard to completely exclude most of our mind from the process.
Replies from: Vladimir_Nesov, DanArmak↑ comment by Vladimir_Nesov · 2009-12-12T01:50:48.951Z · LW(p) · GW(p)
Then it's easy (well, easier) to reason about changing their preferences because we can hold the other parts constant.
As per Pei Wang's suggestion, I'm stating that I'm going to opt out of this conversation until you take seriously (accept/investigate/argue against) the statement that preference is not to be modified, something that I stressed in several of the last comments.
↑ comment by timtyler · 2009-12-12T10:03:43.882Z · LW(p) · GW(p)
Re: "How can there be arguments about what preferences should be?"
The idea that some preferences are "better" than other ones is known as "moral realism".
Replies from: DanArmak↑ comment by DanArmak · 2009-12-12T14:44:06.806Z · LW(p) · GW(p)
Wikipedia says moral realists (in general) claim that moral propositions can be true or false as objective facts but their truth cannot be observed or verified. This doesn't make any sense. Sounds like religion.
Replies from: timtyler, Johnicholas↑ comment by timtyler · 2009-12-12T15:57:49.175Z · LW(p) · GW(p)
Are you looking at http://en.wikipedia.org/wiki/Moral_realism ...?
Care to quote an offending section about moral truths not being observable or verifiable?
Replies from: DanArmak↑ comment by DanArmak · 2009-12-12T16:51:53.110Z · LW(p) · GW(p)
Under the section "Criticisms":
Others are critical of moral realism because it postulates the existence of a kind of "moral fact" which is nonmaterial and does not appear to be accessible to the scientific method. Moral truths cannot be observed in the same way as material facts (which are objective), so it seems odd to count them in the same category. One emotivist counterargument (although emotivism is usually non-cognitivist) alleges that "wrong" actions produce measurable results in the form of negative emotional reactions, either within the individual transgressor, within the person or people most directly affected by the act, or within a (preferably wide) consensus of direct or indirect observers.
Regarding the emotivist criticism, it begs a lot of questions. Surely not all negative emotional reactions signal wrong moral actions. Besides, emotivism isn't aligned with moral realism.
Replies from: timtyler↑ comment by timtyler · 2009-12-12T18:15:13.399Z · LW(p) · GW(p)
I see - thanks.
That some criticisms of moral realism appear to lack coherence does not seem to me to be a point that counts against the idea.
I expect moral realists would deny that morality is any more nonmaterial than any other kind of information - and would also deny that it does not appear to be accessible to the scientific method.
Replies from: DanArmak↑ comment by DanArmak · 2009-12-12T19:09:01.816Z · LW(p) · GW(p)
If moral realism acts as a system of logical propositions and deductions, then it has to have moral axioms. How are these grounded in material reality? How can they be anything more than "because i said so and I hope you'll agree"? Isn't the choice of axioms done using a moral theory nominally opposed to moral realism, such as emotivism, or (amoral) utilitarianism?
Replies from: timtyler↑ comment by timtyler · 2009-12-12T20:12:19.862Z · LW(p) · GW(p)
One way would be to consider the future of civilization. At the moment, we observe a Shifting Moral Zeitgeist. However, in the future we may see ideas about how to behave towards other agents settle down into an optimal region. If that turns out to be a global optimum - rather than a local one - i.e. much the same rules would be found by most surviving aliens - then that would represent a good foundation for the ideas of moral realism.
Even today, it should be pretty obvious that some moral systems are "better" than others ("better" in the sense of promoting the survival of those systems). That doesn't necessarily mean there's a "best" one - but it leaves that possibility open.
↑ comment by Johnicholas · 2009-12-12T18:05:37.402Z · LW(p) · GW(p)
It might also sound like science - don't scientists generally claim that propositions about the world can be true or false, but cannot be directly observed or verified?
Joshua Greene's thesis "The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it" might be a decent introduction to moral realism / irrealism. Overall it is an argument for irrealism.
Replies from: DanArmak↑ comment by DanArmak · 2009-12-12T19:15:09.541Z · LW(p) · GW(p)
In science, a proposition about the world can generally be proven or disproven with arbitrary probability, so you can become as sure about it as you like if you invest enough resources.
In moral realism, propositions are purely logical constructs, and can be proven true or false just like a mathematical proposition. Their truth is one with the truth of the axioms used, and the axioms can't be proven or disproven with any degree of certainty; they are simply accepted or not accepted. The morality is internally consistent, but you can't derive it from the real world, and you can't derive any fact about the real world from the morality. That sounds just like theology to me. (The difference between this and ordinary math or logic is that mathematical constructs aren't supposed to lead to should or ought statements about behavior.)
I will read Greene's thesis, but as far as I can tell it argues against moral realism (and does it well), so it won't help me understand why anyone would believe in it.