The Pascal's Wager Fallacy Fallacy
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-18T00:30:00.000Z · LW · GW · Legacy · 128 comments
Today at lunch I was discussing interesting facets of second-order logic, such as the (known) fact that first-order logic cannot, in general, distinguish finite models from infinite models. The conversation branched out, as such things do, to why you would want a cognitive agent to think about finite numbers that were unboundedly large, as opposed to boundedly large.
So I observed that:
- Although the laws of physics as we know them don't allow any agent to survive for infinite subjective time (do an unboundedly long sequence of computations), it's possible that our model of physics is mistaken. (I go into some detail on this possibility below the cutoff.)
- If it is possible for an agent - or, say, the human species - to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence.
And the one said, "Isn't that a form of Pascal's Wager?"
I'm going to call this the Pascal's Wager Fallacy Fallacy.
You see it all the time in discussion of cryonics. The one says, "If cryonics works, then the payoff could be, say, at least a thousand additional years of life." And the other one says, "Isn't that a form of Pascal's Wager?"
The original problem with Pascal's Wager is not that the purported payoff is large. That is not where the flaw in the reasoning lies. The problem with Pascal's original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that equally tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God).
However, what we have here is the term "Pascal's Wager" being applied solely because the payoff being considered is large - the reasoning being perceptually recognized as an instance of "the Pascal's Wager fallacy" as soon as someone mentions a big payoff - without any attention being given to whether the probabilities are in fact small or whether counterbalancing anti-payoffs exist.
And then, once the reasoning is perceptually recognized as an instance of "the Pascal's Wager fallacy", the other characteristics of the fallacy are automatically inferred: the listener assumes that the probability is tiny and that the scenario has no specific support apart from the payoff.
But infinite physics and cryonics are both possibilities that, leaving their payoffs entirely aside, get significant chunks of probability mass purely on merit.
Yet instead we have reasoning that runs like this:
- Cryonics has a large payoff;
- Therefore, the argument carries even if the probability is tiny;
- Therefore, the probability is tiny;
- Therefore, why bother thinking about it?
(Posted here instead of Less Wrong, at least for now, because of the Hanson/Cowen debate on cryonics.)
Further details:
Pascal's Wager is actually a serious problem for those of us who want to use Kolmogorov complexity as an Occam prior, because the size of even the finite computations blows up much faster than their probability diminishes (see here).
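To see the blow-up concretely (a minimal formal sketch, not from the original post; K is Kolmogorov complexity, BB the busy-beaver function, and h_n an assumed family of hypotheses of description length about n promising BB(n)-sized payoffs):

```latex
\mathbb{E}[U] \;=\; \sum_h 2^{-K(h)}\, U(h),
\qquad
2^{-K(h_n)}\, U(h_n) \;\ge\; 2^{-(n+c)}\,\mathrm{BB}(n) \;\to\; \infty
```

Since BB(n) outgrows 2^n (and every computable function), the terms of the expected-utility sum do not even go to zero, so the sum diverges.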
See Bostrom on infinite ethics for how much worse things get if you allow non-halting Turing machines.
In our current model of physics, time is infinite, and so the collection of real things is infinite. Each time state has a successor state, and there's no particular assertion that time returns to the starting point. Considering time's continuity just makes it worse - now we have an uncountable set of real things!
But current physics also says that any finite amount of matter can only do a finite amount of computation, and the universe is expanding too fast for us to collect an infinite amount of matter. We cannot, on the face of things, expect to think an unboundedly long sequence of thoughts.
The laws of physics cannot be easily modified to permit immortality: lightspeed limits and an expanding universe and holographic limits on quantum entanglement and so on all make it inconvenient to say the least.
On the other hand, many computationally simple laws of physics, like the laws of Conway's Life, permit indefinitely running Turing machines to be encoded. So we can't say that it requires a complex miracle for us to confront the prospect of unboundedly long-lived, unboundedly large civilizations. Just there being a lot more to discover about physics - say, one more discovery of the size of quantum mechanics or Special Relativity - might be enough to knock (our model of) physics out of the region that corresponds to "You can only run boundedly large Turing machines".
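For concreteness (an illustrative sketch, not part of the original post), the entire "physics" of Conway's Life fits in a few lines, which is what makes laws this simple cheap, complexity-wise, while still supporting unbounded computation:

```python
from collections import Counter

def step(live):
    """One tick of Conway's Life; `live` is a set of (x, y) cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3. That's the whole rule,
    # yet it suffices to encode indefinitely running Turing machines.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates forever on an unbounded grid:
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```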
So while we have no particular reason to expect physics to allow unbounded computation, it's not a small, special, unjustifiably singled-out possibility like the Christian God; it's a large region of what various possible physical laws will allow.
And cryonics, of course, is the default extrapolation from known neuroscience: if memories are stored the way we now think, and cryonics organizations are not disturbed by any particular catastrophe, and technology goes on advancing toward the physical limits, then it is possible to revive a cryonics patient (and yes, you are the same person). There are negative possibilities (woken up in a dystopia and not allowed to die), but they are exotic, not having equal probability weight to counterbalance the positive possibilities.
128 comments
Comments sorted by top scores.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-19T07:03:07.000Z · LW(p) · GW(p)
Ask yourself if you would want to revive someone frozen 100 years ago.
Yes. They don't deserve to die. Kthx next.
comment by Carl_Shulman · 2009-03-18T01:52:46.000Z · LW(p) · GW(p)
"that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God)." Utilitarian would rightly attack this, since the probabilities almost certainly won't wind up exactly balancing. A better argument is that wasting time thinking about Christianity will distract you from more probable weird-physics and Simulation Hypothesis Wagers.
A more important criticism is that humans just physiologically don't have any emotions that scale linearly. To the extent that we approximate utility functions, we approximate ones with bounded utility, although utilitarians have a bounded concern with acting or aspiring to act or believing that they aspire to act as though they have concern with good consequences that is close to linear with the consequences, i.e. they have a bounded interest in 'shutting up and multiplying.'
Replies from: Peter_de_Blanc, steven0461
↑ comment by Peter_de_Blanc · 2010-04-24T04:35:46.106Z · LW(p) · GW(p)
utilitarians have a bounded concern with acting or aspiring to act or believing that they aspire to act as though they have concern with good consequences that is close to linear with the consequences
I know this is not what you were suggesting, but this made me think of goal systems of the form "take the action that I think idealized agent X is most likely to take," e.g. WWAIXID.
A huge problem with these goal systems is that the idealized agent will probably have very low-entropy probability distributions, while your own beliefs have very high entropy. So you'll end up acting as if you believed with near-certainty the single most likely scenario you can think of.
Another problem, of course, is that you'll take actions that only make sense for an agent much more competent than you are. For example, AIXI would be happy to bet $1 million that it can beat Cho Chikun at Go.
Replies from: gjm
↑ comment by steven0461 · 2010-04-24T05:27:58.413Z · LW(p) · GW(p)
This seems like a non-standard way of thinking that needs some explanation. It's not clear to me that it matters whether my emotions scale linearly, if I'll reflectively endorse the statement "if there are X good things, and you add an additional good thing, the goodness of that doesn't depend on what X is". It's also not clear to me that utilitarians can be seen as having an intrinsic preference for utilitarian behavior as opposed to a belief that their "true" preferences are utilitarian.
comment by Benya_Fallenstein (Benja_Fallenstein) · 2009-03-19T12:00:10.000Z · LW(p) · GW(p)
Ask yourself if you would want to revive someone frozen 100 years ago. Yes. They don't deserve to die. Kthx next.
I wish that this were on Less Wrong, so that I could vote this up.
Replies from: jkaufman
↑ comment by jefftk (jkaufman) · 2011-09-23T16:46:27.000Z · LW(p) · GW(p)
It is now.
Replies from: Benja
↑ comment by Benya (Benja) · 2013-03-30T11:39:00.902Z · LW(p) · GW(p)
Very well. Upvoted now!
comment by AnnaSalamon · 2009-03-18T06:03:55.000Z · LW(p) · GW(p)
The “isn’t that like Pascal’s wager?” response is plausibly an instance of dark side epistemology, and one that affects many aspiring rationalists.
Many of us came up against the Pascal’s wager argument at some point before we gained much rationality skill, disliked the conclusion, and hunted around for some means of disagreeing with its reasoning. The overcomingbias thread discussing Pascal’s wager strikes me as including a fair number of fallacious comments aimed at finding some rationale, any rationale, for dismissing Pascal’s wager.
If these arguments tended merely to be about factual matters (“Pascal’s wager can’t be true, because, um, the moon moves in such-and-such a manner”), attempts to dismiss Pascal’s wager without solid argument would perhaps not be all that problematic. But in the specific case of Pascal’s wager, ad hoc rationalizations for its dismissal tend to center around methodological claims: people dislike the conclusion, and so make claims against expected value calculations, expected value calculations’ applicability to high payoffs, or other inference or decision theoretic methodologies. This is exactly dark side epistemology: someone dislikes a conclusion, does not have a principled derivation of why the conclusion should be false, and so seizes on some methodology or other to bolster their dismissal -- and then imports that methodology into the rest of their life, with harmful consequences (e.g., avoiding cryonics).
I’m not endorsing Pascal’s wager. Carl’s critique (above, and also in the original thread) strikes me as valid. It’s just that we need to be really really careful about making up rationalizations and importing those rationalized methodologies elsewhere; the rationalizations can hurt us even in cases where the rationalized conclusion turns out to be true. And Pascal’s wager is such an easy situation in which to make methodological rationalizations -- it sounds absurd, it involves religion, which is definitely uncool, and its claimed conclusion threatens things many of us care about, such as how we live our lives and form beliefs. We might need “be specially on guard!” routines for such situations.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-19T21:42:21.000Z · LW(p) · GW(p)
Yvain, while it's hard to get a feel for what exactly happens when one of the meddling dabblers tries to give their AI a goal system, I would mostly expect those AIs to end up as paperclip maximizers, or at most, tiling the universe with tiny molecular smiley-faces. Nothing sentient.
Most AIs gone wrong are just going to disassemble you, not hurt you. I think I've emphasized this a number of times, which is why it's surprising that I've seen both you and Robin Hanson, respectable rationalists both, go on attributing the opposite opinion to me.
comment by Yvain2 · 2009-03-18T01:43:43.000Z · LW(p) · GW(p)
"There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities."
That doesn't seem at all obvious to me. First, our current society doesn't allow people to die, although today law enforcement is spotty enough that they can't really prevent it. I assume far future societies will have excellent law enforcement, including mind reading and total surveillance (unless libertarians seriously get their act together in the next hundred years). I don't see any reason why the taboo on suicide must disappear. And any society advanced enough to revive me has by definition conquered death, so I can't just wait it out and die of old age. I place about 50% odds on not being able to die again after I get out.
I'm also less confident that the future wouldn't be a dystopia. Even in the best-case scenario the future's going to be scary through sheer cultural drift (see: legalized rape in Three Worlds Collide). I don't have to tell you that it's easier to get a Singularity that goes horribly wrong than one that goes just right, and even if we restrict the possibilities to those where I get revived instead of turned into paperclips, they could still be pretty grim (what about some well-intentioned person hard-coding "Promote and protect human life" into an otherwise poorly designed AI, and ending up with something that resurrects the cryopreserved...and then locks them in little boxes for all eternity so they don't consume unnecessary resources?). And then there are the standard fears of some dictator or fundamentalist theocracy, only this time armed with mind control and total surveillance so there's no chance of overthrowing them.
The deal-breaker is that I really, really don't want to live forever. I might enjoy living a thousand years, but not forever. You could change my mind if you had a utopian post-singularity society that completely mastered Fun Theory. But when I compare the horrible possibility of being forced to live forever either in a dystopia or in a world no better or worse than our own, to the good possibility of getting to live between thousand years and forever in a Fun Theory utopia that can keep me occupied...well, the former seems both more probable and more extreme.
Replies from: Ulysses, Gurkenglas, Ulysses
↑ comment by Ulysses · 2011-01-22T04:35:29.114Z · LW(p) · GW(p)
The threat of dystopia stresses the importance of finding or making a trustworthy, durable institution that will relocate/destroy your body if the political system starts becoming grim.
Of course there is no such thing. Boards can become infiltrated. Missions can drift. Hostile (or even well-intentioned) outside agents can act suddenly before your guardian institution can respond.
But there may be measures you can take to reduce fell risk to acceptable levels (i.e: levels comparable to current risk of exposure to, as Yudkowsky mentioned, secret singularity-in-a-basement):
You could make contracts with (multiple) members of the younger generation of cryonicists, on condition that they contract with their younger generation, etc. to guard your body throughout the ages.
You can hide a very small bomb in your body that continues to count down slowly even while frozen (don't know if we have the technology yet, but it doesn't sound too sophisticated) so as to limit the amount of divergence from now that you are willing to expose yourself to [explosion small enough to destroy your brain, but not the brain next to you].
You can have your body hidden and known only to cryonicist leaders.
You can have your body's destruction forged.
I don't think any combination of THESE suggestions will suffice. But it is worth very much effort inventing more (and not necessarily sharing them all online), and making them possible if you are considering freezing yourself.
Replies from: Will_Sawin
↑ comment by Will_Sawin · 2011-02-02T05:38:25.014Z · LW(p) · GW(p)
Hi.
↑ comment by Gurkenglas · 2013-05-18T20:24:05.998Z · LW(p) · GW(p)
There is a minuscule probability that during the next 10 seconds, nanomachines produced by a fresh GAI sweep in through your window and capture you for infinite life and thus, by your argument, infinite hell. Building on your argumentation, the case can be made that you should strive to minimize the probability of that outcome. Therefore, suicide.
Edit: My point has already been made by Eliezer. Let's see how this retracting thingy works.
↑ comment by Ulysses · 2011-01-22T04:17:36.636Z · LW(p) · GW(p)
The threat of dystopia stresses the importance of finding or making a trustworthy, durable institution that will relocate/destroy your body if the political system starts becoming grim.
Of course there is no such thing. Boards can become infiltrated. Missions can drift. Hostile (or even well-intentioned) outside agents can act suddenly before your guardian institution can respond.
But there may be measures you can take to reduce fell risk to acceptable levels. You could make contracts with (multiple) members of the younger generation of cryonicists, on condition that they contract with their younger generation, etc., to guard your body throughout the ages. You can hide a very small bomb in your body that continues to count down slowly even while frozen (don't know if we have the technology yet, but it doesn't sound too sophisticated) so as to limit the amount of divergence from now that you are willing to expose yourself to [explosion small enough to destroy your brain, but not the brain next to you]. You can have your body hidden and known only to cryonicist leaders. You can have your body's destruction forged.
No matter what arrangements you make, if you choose to freeze yourself you can never get the probability of being indefinitely tortured upon reanimation down to zero. So what is an acceptable level of risk? I'll give you a lower bound: the probability that a terrorist group has already secretly figured out how to extend life indefinitely, and is en route to kidnap you now.
I don't think all the suggestions I made put together will suffice. But it is worth very much effort inventing more (and not necessarily sharing them all online), and making them possible if you are considering freezing yourself.
comment by JohnH · 2011-05-06T03:27:57.108Z · LW(p) · GW(p)
No one has pointed this out, but Muslims consider Christians to be people of the Book and allow that they may go to heaven, assuming they are good Christians.
Further, Hindus and Buddhists believe in reincarnation: if one is a good Christian, one may be reincarnated as a Hindu or Buddhist next time around, so it is safe to ignore them in calculating Pascal's Wager. Also, Hindus claim that Christianity, Islam, and Judaism all worship the Hindu Brahman.
Catholics, too, since Vatican II, believe that it is possible for everyone who is not Catholic to go to heaven, and in particular accept most other Christian baptisms as valid.
If one were to look at Pascal's Wager strictly from the perspective of what each religion believes about other religions going to Hell or Heaven, then one would be left with only a relatively few possible choices such that believing in one of them will send you to hell from the others' perspective. If one further counts as evidence the number of believers (of other religions) that allow one the possibility of reaching heaven, then the group with the most evidence would be some ultra-conservative evangelical group that thinks all Catholics and mainstream Protestants are going to Hell but is part of a greater communion recognized by the Catholic Church -- like the Westboro Baptists, if that group were actually Baptist.
Pascal's Wager seems like a poor way to choose a religion if one is more concerned with what is actually true. It is also a poor way to choose a religion if one actually believes that God hears and answers prayers. I am highly biased on this point though and claim additional information.
comment by Jay · 2009-03-19T03:01:09.000Z · LW(p) · GW(p)
Ask yourself if you would want to revive someone frozen 100 years ago. Most Americans of the time were unabashedly racist, had little concept of electricity and none of computing, had vaguely heard of automobiles, etc. They'd be awakened into a world that they don't understand, a world that judges them by mysterious criteria. It would be worse than being foreign, because the new culture's values were formed at least partially in reaction to the perceived problems of the past.
comment by Yvain2 · 2009-03-19T20:30:45.000Z · LW(p) · GW(p)
"I'm curious to know how you know that in advance? Isn't it like a kid making a binding decision on its future self? As Aubrey says, (I'm paraphrasing): "If I'm healthy today and enjoying my life, I'll want to wake up tomorrow. And so on." You live a very long time one day at a time."
Good point. I usually trust myself to make predictions of this sort. For example, I predict that I would not want to eat pizza every day in a row for a year, even though I currently like pizza, and this sort of prediction has worked in the past. But I should probably think harder before I become certain that I can make this prediction with something more complicated like my life. I know that many of the very elderly people I know claim they're tired of life and just want to die already, and I predict that I have no special immunity to this phenomenon that will let me hold out forever. But I don't know how much of that is caused by literally being bored with what life has to offer already, and how much of it is caused by decrepitude and inability to do interesting things.
"Evil is far harder than good or mu, you have to get the future almost right for it to care about people at all, but somehow introduce a sustainable evil twist to it."
In all of human society-space, not just the ones that have existed but every possible combination of social structures that could exist, I interpret only a vanishingly small number (the ones that contain large amounts of freedom, for example) as non-evil. Looking over all of human history, the number of societies I would have enjoyed living in are pretty minimal. I'm not just talking about Dante's Hell here. Even modern day Burma/Saudi Arabia, or Orwell's Oceania would be awful enough to make me regret not dying when I had the chance.
I don't think it's so hard to get a Singularity that leaves people alive but is still awful. If the problem is a programmer who tried to give it a sense of morality but ended up using a fake utility function or just plain screwing up, he might well end up with a With Folded Hands scenario or Parfit's Mere Addition Paradox (I remember Eliezer saying once - imagine if we get an AI that understands everything perfectly except freedom). And that's just the complicated failure - the simple one is that the government of Communist China develops the Singularity AI and programs it to do whatever they say.
"Also, steven points out for the benefit of altruists that if it's not you who's tortured in the future dystopia, the same resources will probably be used to create and torture someone else."
I think that's false. In most cases I imagine, torturing people is not the terminal value of the dystopia, just something they do to people who happen to be around. In a pre-singularity dystopia, it will be a means of control and they won't have the resources to 'create' people anyway, (except the old-fashioned way). In a post-singularity dystopia, resources won't much matter and the AI's more likely to be stuck under injunctions to protect existing people than trying to create new ones (unless the problem is the Mere Addition Paradox). Though I admit it would be a very specific subset of rogue AIs that view frozen heads as "existing people".
"Though I hesitate to point this out, the same logic against cryonic suspension also implies that egoists, but not altruists, should immediately commit suicide in case someone is finishing their AI project in a basement, right now. A good number of arguments against cryonics also imply suicide in the present."
I'm glad you hesitated to point it out. Luckily, I'm not as rationalist as I like to pretend :) More seriously, I currently have a lot of things preventing me from suicide. I have a family, a debt to society to pay off, and the ability to funnel enough money to various good causes to shape the future myself instead of passively experiencing it. And less rationally but still powerfully, I have the self-preservation urge strongly enough that it would probably kick in if I tried anything. Someday when the Singularity seems very near, I really am going to have to think about this more closely. If I think a dictator's about to succeed on an AI project, or if I've heard about the specifics of a project's code and the moral system seems likely to collapse, I do think I'd be sitting there with a gun to my head and my finger on the trigger.
comment by Nick_Tarleton · 2009-03-18T23:10:20.000Z · LW(p) · GW(p)
I think a heuristic something like this is often involved: "If someone claims a high benefit (at any probability) for some costly implausible course of action, there's a good chance they're (a) consciously trying to exploit me, (b) infected by a parasitic meme, or (c) getting off on the delusion that they have a valuable Cause. In any of those cases, they'll probably have plenty of persuasive invalid arguments; if I try to analyze these, I may be convinced in spite of myself, so I'd better find whatever justification I can to stop thinking."
vroman: See The Least Convenient Possible World.
Carl: Islam and Christianity may not balance, but what about Christianity and anti-Christianity?
comment by g · 2009-03-18T08:59:20.000Z · LW(p) · GW(p)
Eliezer, it seems to me that you may be being unfair to those who respond "Isn't that a form of Pascal's wager?". In an exchange of the form
Cryonics Advocate: "The payoff could be a thousand extra years of life or more!"
Cryonics Skeptic: "Isn't that a form of Pascal's wager?"
I observe that CA has made handwavy claims about the size of the payoff, hasn't said anything about how the utility of a long life depends on its length (there could well be diminishing returns), hasn't offered anything at all like a probability calculation, and has entirely neglected the downsides (I think Yvain makes a decent case that they aren't obviously dominated by the upside). So, here as in the original Pascal's wager, we have someone arguing "put a substantial chunk of your resources into X, which has uncertain future payoff Y" on the basis that Y is obviously very large, while apparently ignoring the three key subtleties: how to get from Y to the utility-if-it-works, what other low-probability but high-utility-delta possibilities there are, and just what the probability-that-it-works is. And, here as with the original wager, if the argument does work then its consequences are counterintuitive to many people (presumably including CS).
That wouldn't justify saying "That is just Pascal's wager, and I'm not going to listen to you any more." But what CS actually says is "Isn't that a form of Pascal's wager?". It doesn't seem to me an unreasonable question, and it gives CA an opportunity to explain why s/he thinks the utility really is very large, the probability not very small, etc.
I think the same goes for your infinite-physics argument.
I don't see any grounds for assuming (or even thinking it likely) that someone who says "Isn't that just a form of Pascal's wager?" has made the bizarrely broken argument you suggest that they have. If they've made a mistake, it's in misunderstanding (or failing to listen to, or not guessing correctly) just what the person they're talking to is arguing.
Therefore: I think you've committed a Pascal's Wager Fallacy Fallacy Fallacy.
comment by Michael_G.R. · 2009-03-18T04:13:36.000Z · LW(p) · GW(p)
Yvain wrote: "The deal-breaker is that I really, really don't want to live forever. I might enjoy living a thousand years, but not forever. "
I'm curious to know how you know that in advance? Isn't it like a kid making a binding decision on its future self?
As Aubrey says, (I'm paraphrasing): "If I'm healthy today and enjoying my life, I'll want to wake up tomorrow. And so on." You live a very long time one day at a time.
comment by orthonormal · 2017-01-10T23:49:12.184Z · LW(p) · GW(p)
How did this post get attributed to [deleted] instead of to Eliezer? I'm 99% sure this post was by him, and the comments seem to bear it out.
Replies from: Elo
↑ comment by Elo · 2017-01-11T03:23:28.620Z · LW(p) · GW(p)
I see Eliezer_Yudkowsky as the account it was posted from. Unsure what you are seeing.
Replies from: Gram_Stone
↑ comment by Gram_Stone · 2017-01-11T04:41:39.060Z · LW(p) · GW(p)
Additional data point: I see [deleted].
Replies from: Morendil
↑ comment by Morendil · 2017-01-11T08:35:59.231Z · LW(p) · GW(p)
Me, as well.
(Edit: looking at Internet Archive's cached snapshots, all of them that I checked look that way to me too.)
(Edit2: it has looked that way to others as well for quite some time. I wouldn't worry about it.)
Replies from: gjm
comment by Jonathan_Graehl · 2010-05-18T01:31:15.340Z · LW(p) · GW(p)
It's odd that the article author shows as [deleted] (Eliezer is the author).
Replies from: RobinZ
comment by Nick_Tarleton · 2009-03-19T21:36:50.000Z · LW(p) · GW(p)
If the problem is a programmer who tried to give it a sense of morality but ended up using a fake utility function or just plain screwing up, he might well end up with a With Folded Hands scenario or Parfit's Mere Addition Paradox (I remember Eliezer saying once - imagine if we get an AI that understands everything perfectly except freedom). And that's just the complicated failure - the simple one is that the government of Communist China develops the Singularity AI and programs it to do whatever they say.
For whatever relief it's worth, someone who thought that was a good idea would have a good chance of building a paperclipper instead. "There is a limit to how competent you can be, and still be that stupid."
comment by Yvain2 · 2009-03-19T20:46:11.000Z · LW(p) · GW(p)
One more thing: Eliezer, I'm surprised to be on the opposite side as you here, because it's your writings that convinced me a catastrophic singularity, even one from the small subset of catastrophic singularities that keep people alive, is so much more likely than a good singularity. If you tell me I'm misinterpreting you, and you assign high probability to the singularity going well, I'll update my opinion (also, would the high probability be solely due to the SIAI, or do you think there's a decent chance of things going well even if your own project fails?)
comment by steven · 2009-03-19T14:33:36.000Z · LW(p) · GW(p)
Does nobody want to address the "how do we know U(utopia) - U(oblivion) is of the same order of magnitude as U(oblivion) - U(dystopia)" argument? (I hesitate to bring this up in the context of cryonics, because it applies to a lot of other things and because people might be more than averagely emotionally motivated to argue for the conclusion that supports their cryonics opinion, but you guys are better than that, right? right?)
Carl, I believe the point is that until I know of a specific argument why one is more likely than the other, I have no choice but to set the probability of christianity equal to the probability of anti-christianity, even though I don't doubt such arguments exist. (Both irrationality-punishers and immorality-punishers seem far less unlikely than nonchristianity-punishers, so it's moot as far as I can tell.)
Vladimir, your argument doesn't apply to moralities with an egoist component of some sort, which is surely what we were discussing even though I'd agree they can't be justified philosophically.
I stand by all the arguments I gave against Pascal's wager in the comments to Utilitarian's post, I think.
comment by CarlShulman · 2009-03-19T02:04:53.000Z · LW(p) · GW(p)
Nick,
"Islam and Christianity may not balance, but what about Christianity and anti-Christianity?" Why would you think that Christianity and anti-Christianity plausibly balance exactly? Spend some time thinking about the distribution of evolved minds and what they might simulate, and you'll get divergence.
comment by g · 2009-03-18T23:04:36.000Z · LW(p) · GW(p)
vroman, see the post on Less Wrong about least-convenient possible worlds. And the analogue in Doug's scenario of the existence of (Pascal's) God isn't the reality of the lottery he proposes -- he's just asking you to accept that for the sake of argument -- but your winning the lottery.
comment by Carl_Shulman · 2009-03-18T07:48:29.000Z · LW(p) · GW(p)
Pablo,
Vagueness might leave you unable to subjectively distinguish probabilities, but you would still expect that an idealized reasoner using Solomonoff induction with unbounded computing power and your sensory info would not view the probabilities as exactly balancing, which would give infinite information value to further study of the question.
The idea that further study wouldn't unbalance estimates in humans is both empirically false in the cases of a number of smart people who have undertaken it, and looks like another rationalization.
comment by Carl_Shulman · 2009-03-18T06:28:45.000Z · LW(p) · GW(p)
The fallacious arguments against Pascal's Wager are usually followed by motivated stopping.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-18T02:53:25.000Z · LW(p) · GW(p)
There is no first-order sentence which is true in all and only finite models and not in any infinite models.
Sketch of conventional proof: The compactness theorem says that if a collection of first-order sentences is inconsistent, then a finite subset of those first-order sentences is inconsistent.
To a sentence or theory true of all finite sets, adjoin the infinite series of statements "This model has at least one element", "This model has at least two elements" (that is, there exist a and b with a != b), "This model has at least three elements" (the finite sentence: exists a, b, c, and a != b, b != c, a != c), and so on.
No finite subset of these statements is inconsistent with the original theory, therefore by compactness the set as a whole is consistent with the original theory. Therefore the original theory possesses an infinite model. QED.
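In symbols (a sketch of the adjoined sentences, not part of the original comment):

```latex
\varphi_n \;:=\; \exists x_1 \cdots \exists x_n \bigwedge_{1 \le i < j \le n} x_i \neq x_j
\qquad \text{(``the model has at least $n$ elements'')}
```

Any finite subset of the augmented theory mentions only finitely many of the \varphi_n, so a sufficiently large finite model of the original theory satisfies it; compactness then yields a single model satisfying every \varphi_n at once, which must be infinite.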
comment by retired_urologist · 2009-03-19T22:27:26.000Z · LW(p) · GW(p)
Isn't there already a good deal of experience regarding the attitudes/actions of the most intelligent entity known (in current times, humans) towards cryonically suspended potential sentient beings (frozen embryos)?
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-19T19:02:20.000Z · LW(p) · GW(p)
Nick, Christians are not a majority (and if they were, an alternative course would be to try to shift majority opinions to something easier to believe, preferably before you died but it has to get done...)
I'm not claiming that U(utopia) - U(oblivion) ~ U(oblivion) - U(dystopia + revival + no suicide), but the question is whether the factor describing the relative interval is greater than the factor of diminished probability for U(dystopia + revival + no suicide), which seems large. Also, steven points out for the benefit of altruists that if it's not you who's tortured in the future dystopia, the same resources will probably be used to create and torture someone else.
Though I hesitate to point this out, the same logic against cryonic suspension also implies that egoists, but not altruists, should immediately commit suicide in case someone is finishing their AI project in a basement, right now. A good number of arguments against cryonics also imply suicide in the present.
comment by Michael_G.R. · 2009-03-19T14:36:09.000Z · LW(p) · GW(p)
"Most Americans of the time were unabashedly racist, had little concept of electricity and none of computing, had vaguely heard of automobiles, etc."
So if you woke up in a strange world with technologies you don't understand (at first) and mainstream values you disagree with (at first), you would rather commit suicide than try to learn about this new world and see if you can have a pleasant life in it?
comment by Nick_Tarleton · 2009-03-19T02:19:44.000Z · LW(p) · GW(p)
Why would you think that Christianity and anti-Christianity plausibly balance exactly?
Because I've been thinking about algorithmic complexity, not the actions of agents. Good point.
comment by vroman · 2009-03-19T01:57:01.000Z · LW(p) · GW(p)
I read and understood the Least Convenient Possible World post. Given that, let me rephrase your scenario slightly:
If every winner of a certain lottery receives $X * 300 million, a ticket costs $X, the chances of winning are 1 in 250 million, you can only buy one ticket, and $X represents an amount of money you would be uncomfortable to lose, would you buy that ticket?
Answer: no. If the ticket price crosses a certain threshold, then I become risk-averse. If it were $1 or some other relatively inconsequential amount of money, then I would be rationally compelled to buy the nearly-sure-loss ticket.
Replies from: SecondWind
↑ comment by SecondWind · 2013-05-07T15:04:38.010Z · LW(p) · GW(p)
If you'd be rationally compelled to buy one low-cost ticket, then after you've bought the ticket you should be rationally compelled to buy a ticket. And then rationally compelled to buy a ticket.
Sure, at each step you're approaching the possibility with one fewer dollar, but by your phrasing, the number of dollars you have does not influence your decision to buy a ticket (unless you're broke enough that $1 is no longer a relatively inconsequential amount of money). This method seems to require an injunction against iteration.
comment by Doug_S. · 2009-03-18T21:48:20.000Z · LW(p) · GW(p)
What if we phrase a Pascal's Wager-like problem like this:
If every winner of a certain lottery receives $300 million, a ticket costs $1, the chances of winning are 1 in 250 million, and you can only buy one ticket, would you buy that ticket?
There's a positive expected value in dollars, but 1 in 250 million is basically not gonna happen (to you, at least).
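For reference (a quick check of Doug's numbers, not part of the original comment), the expected value per ticket works out to about +$0.20:

```python
payoff = 300_000_000        # dollars per win
cost = 1                    # dollars per ticket
p_win = 1 / 250_000_000

ev = p_win * payoff - cost  # expected profit per ticket
print(round(ev, 2))         # 0.2 -- positive in expectation, yet with
                            # probability ~0.999999996 you simply lose $1
```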
comment by Carl_Shulman · 2009-03-18T09:17:19.000Z · LW(p) · GW(p)
g,
This is based on the diavlog with Tyler Cowen, who did explicitly say that decision theory and other standard methodologies don't apply well to Pascalian cases.
comment by Larry_D'Anna · 2009-03-18T02:50:43.000Z · LW(p) · GW(p)
"first-order logic cannot, in general, distinguish finite models from infinite models."
Specifically, if a first-order theory has arbitrarily large finite models, then it has an infinite one.
comment by Anonymous56 · 2009-03-18T02:18:15.000Z · LW(p) · GW(p)
Johnicholas:
I agree with your sentiment, however:
There is a perfectly good description of the real numbers that is not ugly. Namely, the real numbers are a complete Archimedean ordered field.
To actually construct them, I think using (Cauchy) convergent sequences of rational numbers would be much less ugly than using Dedekind cuts.
Also, the Löwenheim–Skolem theorem only applies to first-order logic, not second-order logic. Why are you constraining me to use only first-order logic? You have to explain that first.
comment by Luke_A_Somers · 2011-10-09T04:42:13.124Z · LW(p) · GW(p)
At a more practical level, Pascal's Wager's main failure is to strategically believe rather than rationally believe. Also, the notion that God would put up with a belief of that sort.
This particular failure mode applies to very few other arguments.
Replies from: wedrifid↑ comment by wedrifid · 2011-10-09T04:47:28.682Z · LW(p) · GW(p)
At a more practical level, Pascal's Wager's main failure is to strategically believe rather than rationally believe. Also, the notion that God would put up with a belief of that sort.
That isn't a failure mode. Strategic belief is a perfectly valid desideratum-maximization strategy. The only time strategic belief is an actual failure mode is when you intrinsically value correct belief - in which case you don't believe strategically, and so do not fail.
comment by TimFreeman · 2011-05-05T21:40:17.451Z · LW(p) · GW(p)
If it is possible for an agent - or, say, the human species - to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence.
The problem with Pascal's Wager is that it allows absurdly large utilities into the equation. If I'm looking at a nice fresh apple, and it's 11:45am just before lunch, and breakfast was at 7am, then suppose the utility increment from eating that apple is X. I'd subjectively estimate that my utility for the best possible future (Heaven for Pascal's wager, the infinite wonderful future in the scenario quoted above) is a utility increment less than one trillion times X, probably less than a billion, perhaps more than a million, definitely more than a thousand. If we make the increment much more, say 3^^^3 times X, then we get into Pascal's Wager problems.
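For a sense of the scale of 3^^^3 (an illustrative sketch of Knuth's up-arrow notation, not part of the original comment; only tiny arguments are actually computable):

```python
def up(a, n, b):
    """Knuth's up-arrow: up(a, 1, b) = a**b; up(a, n, b) = a (n arrows) b."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3  = 27
print(up(3, 2, 3))  # 3^^3 = 3**27 = 7625597484987
# 3^^^3 = up(3, 3, 3) is a tower of 3s of height 7625597484987 -- far beyond
# any physical computation, which is the point: utilities that large swamp
# any sane probability estimate.
```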
comment by Vichy · 2009-06-01T22:37:27.153Z · LW(p) · GW(p)
"If it is possible for an agent - or, say, the human species - to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence." Doesn't this arbitrarily favor future events? But future-self isn't current-self, it's literally a different person. Distinguishing between desirable outcomes is tautological, your values precede evaluation.
comment by ad2 · 2009-03-19T21:24:48.000Z · LW(p) · GW(p)
I don't have to tell you that it's easier to get a Singularity that goes horribly wrong than one that goes just right
Don't the acceleration-of-history arguments suggest that there will be another singularity, a century or so after the next one? And another one shortly after that, etc?
What are the chances that they will all go exactly right for us?
comment by steven · 2009-03-19T21:18:23.000Z · LW(p) · GW(p)
Nick, I'm now sitting here being inappropriately amused at the idea of Hal Finney as Dark Lord of the Matrix.
Eliezer, thanks for responding to that. I'm never sure how much to bring up this sort of morbid stuff. I agree as to what the question is.
Also, steven points out for the benefit of altruists that if it's not you who's tortured in the future dystopia, the same resources will probably be used to create and torture someone else.
It was Vladimir who pointed that out, I just said it doesn't apply to egoists. I actually don't agree that it applies to altruists either; presumably most anything that cared that much about torturing newly created people would also use cryonauts for raw materials. Also, maybe there are "people who are still alive" considerations.
comment by Vladimir_Nesov · 2009-03-19T15:57:12.000Z · LW(p) · GW(p)
Steven, to account for the especially egoist morality, all you need to do is especially value future-you. I don't see how it changes my points.
comment by Nick_Tarleton · 2009-03-19T15:44:16.000Z · LW(p) · GW(p)
irrationality-punishers and immorality-punishers seem far less unlikely than nonchristianity-punishers
If you mean "in rough proportion to the algorithmic complexity of Christianity", nonmajoritarianism-punishers, and presumably plenty of other simple entities, would effectively be nonchristianity-punishers. Probably still true, though.
comment by Nick_Tarleton · 2009-03-19T02:23:38.000Z · LW(p) · GW(p)
Specifically, thinking of the algorithmic complexity of the religion - if I were to use priors here, I should be thinking about utility(belief)*prior probability of algorithms computing functions from beliefs to reward or punishment.
comment by Dmitriy_Kropivnitskiy · 2009-03-18T21:24:11.000Z · LW(p) · GW(p)
Pascal's Wager != Pascal's Wager Fallacy. If the original Pascal's Wager didn't depend on a highly improbable proposition (the existence of a particular version of god), it would be logically sound (or at least more sound than it is). So, I don't see a problem comparing cryonics advocacy logic with Pascal's Wager.
On the other hand, I find some of the probability estimates cryonics advocates make to be unsound, so for me, this way of advocating cryonics does look like a Pascal's Wager Fallacy. In particular, I don't see why cryonics advocates put high probability values on being revived in the future (number 3 in Robin Hanson's post) and on liking the future enough to want to live there (look at Yvain's comment on this post). Also, putting unconditional high utility value on long life span seems a doubtful proposition. I am not sure that a life of torture is better than non-existence.
comment by Alex_M · 2009-03-18T21:01:10.000Z · LW(p) · GW(p)
Eliezer, thanks. I've found material on the holographic principle and did some reading myself. It's an intriguing idea, but so far one with no experimental basis: aside from an unconfirmed source of noise in a gravitational-wave experiment, it's not known whether the holographic principle/cosmological information bound actually plays a role. Why did you include it in your post? Were you just giving another possible example of how the universe seems to conspire against our ambitions?
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-18T18:02:41.000Z · LW(p) · GW(p)
Alex, google: "Holographic bound".
comment by Vladimir_Nesov · 2009-03-18T17:31:27.000Z · LW(p) · GW(p)
Steven, even the minus-utility hell won't get worse because it has information useful for the positive-utility eutopia. Only and specifically the positive-utility eutopia could have a use for such information. You win from providing this information in case of a good outcome, and you don't lose in case of a bad outcome.
comment by Vladimir_Nesov · 2009-03-18T14:28:34.000Z · LW(p) · GW(p)
@Yvain: Don't look at the future as containing you; ask what the future can do, worse or better, if it's in possession of the information about you. It can reconstruct you-alive using that information and let the future you enjoy life in the future, or it can reconstruct you-alive and torture it for eternity. But in which of these cases does the future actually get better or worse, depending on whether you give the future the information about your structure? Is the torture-people future going to get better because you don't give them specifically the information about your brain? That torture-people future must be specifically evil if it cares so much about creating torture experience especially for the real people who lived in the past, as opposed to, say, paperclipping the universe with torture chambers full of randomly generated people. Evil is far harder than good or mu: you have to get the future almost right for it to care about people at all, but somehow introduce a sustainable evil twist to it.
comment by Pablo_Stafforini · 2009-03-18T07:28:30.000Z · LW(p) · GW(p)
Utilitarian would rightly attack this, since the probabilities almost certainly won't wind up exactly balancing.
Utilitarian's reply seems to assume that probability assignments are always precise. We may plausibly suppose, however, that belief states are sometimes vague. Granted this supposition, we cannot infer that one probability is higher than the other from the fact that the probabilities do not wind up exactly balancing.
comment by steven · 2009-03-18T01:02:42.000Z · LW(p) · GW(p)
There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities.
Expected utility is the product of two things, probability and utility. Saying the probability is smaller is not a complete argument.
comment by Johnicholas · 2009-03-18T00:58:05.000Z · LW(p) · GW(p)
You reference a popular idea, something like "The integers are countable, but the real number line is uncountable." I apologize for nitpicking, but I want to argue against philosophers (that's you, Eliezer) blindly repeating this claim, as if it was obvious or uncontroversial.
Yes, it is strictly correct according to current definitions. However, there was a time when people were striving to find the "correct" definition of the real number line. What people ended up with was not the only possibility, and Dedekind cuts (or various other things) are a pretty ugly, arbitrary construction.
The set containing EVERY number that you might, even in principle, name or pick out with a definition is countable (because the set of names, or definitions, is a subset of the set of strings, which is countable).
The Lowenheim-Skolem theorem says (loosely interpreted) that even if you CLAIM to be talking about uncountably infinite things, there's a perfectly self-consistent interpretation of your talk that refers only to finite things (e.g. your definitions and proofs themselves).
You don't get magical powers of infinity just from claiming to have them. Standard mathematical talk is REALLY WEIRD from a computer science perspective.
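A concrete version of the countability claim (an illustrative sketch, not part of the original comment): descriptions are finite strings over a finite alphabet, and those can be enumerated one by one, so there are only countably many of them:

```python
from itertools import count, product

def all_strings(alphabet="ab"):
    """Yield every finite string over the alphabet: length 0, then 1, then 2, ..."""
    for n in count(0):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

# Every finite definition occurs at some finite index in this stream, so the
# set of numbers you might pick out with a definition is at most countable.
gen = all_strings()
print([next(gen) for _ in range(7)])  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```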
Replies from: Eliezer_Yudkowsky, Patrick
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-04-29T10:44:28.207Z · LW(p) · GW(p)
The Lowenheim-Skolem theorem says (loosely interpreted) that even if you CLAIM to be talking about uncountably infinite things, there's a perfectly self-consistent interpretation of your talk that refers only to finite things (e.g. your definitions and proofs themselves).
Only in first-order logic. In second-order logic, you can actually talk about the natural numbers as distinguished from any other collection, and the uncountable reals.
Amusingly, if you insist that we are only allowed to talk in first-order logic, it is impossible for you to talk about the property "finite", since there is no first-order formula which expresses this property. (Follows from the Compactness Theorem for first-order logic - any set of first-order formulae which are true of unboundedly large finite collections also have models of arbitrarily large infinite cardinality.) Without second-order logic there is no way to talk about this property of "finiteness", or for that matter "countability", which you seem to think is so important.
Replies from: cousin_it, Johnicholas, None
↑ comment by cousin_it · 2011-04-29T12:04:44.247Z · LW(p) · GW(p)
You can capture the property "finite" with a first-order sentence over the "standard integers", I think. This leaves open the mystery of what exactly the "standard integers" are, which looks slightly less mysterious than the mystery of "sets" required for second-order logic.
↑ comment by Johnicholas · 2011-04-29T11:11:36.170Z · LW(p) · GW(p)
Yes, that's my understanding as well.
Proof theory for second-order logic seems to be problematic, and I have a formalist stance towards mathematics in general, which leads me to suspect that the standard definitions of second-order logic are somehow smuggling in uncountable infinities, rather than justifying them.
But I admit second-order logic is not something I've studied in depth.
Replies from: cousin_it
↑ comment by cousin_it · 2011-04-29T11:47:58.058Z · LW(p) · GW(p)
Yeah, second-order logic is basically set theory in disguise. I'm not sure why Eliezer likes it. Example from the Wikipedia page:
There is a finite second-order theory whose only model is the real numbers if the continuum hypothesis holds and which has no model if the continuum hypothesis does not hold. This theory consists of a finite theory characterizing the real numbers as a complete Archimedean ordered field plus an axiom saying that the domain is of the first uncountable cardinality. This example illustrates that the question of whether a sentence in second-order logic is consistent is extremely subtle.
↑ comment by [deleted] · 2011-04-29T15:40:53.291Z · LW(p) · GW(p)
Amusingly, if you insist that we are only allowed to talk in first-order logic, it is impossible for you to talk about the property "finite", since there is no first-order formula which expresses this property.
An equivalent (and in my opinion less misleading) way of putting this is to say that there's no first-order formula which expresses the property of being infinite.
↑ comment by Patrick · 2011-04-27T12:53:17.351Z · LW(p) · GW(p)
That's not the Lowenheim-Skolem Theorem. You've confused finite with countable (i.e. finite or countably infinite). Here's a simple example of a theory that can't be satisfied with a finite model.
- exists x forall y (x != s(y))
- forall x,y (s(x) = s(y) -> x = y)
Any model that satisfies this must have at least 1 element by axiom 1. Call it 0. s(0) != 0 so the model must have at least 2 elements. s(s(0)) != s(0) by axiom 2. So the model has at least 3 elements.
Suppose we have n distinct elements in our model, all obtained by applying s to 0 the appropriate number of times. Then we need one more, since s(s(...s(0)...)) [s applied n times] != s(s(...s(0)...)) [s applied n-1 times] (this follows from axiom 2).
So any model that satisfies these axioms must be infinite. (Incidentally, you can get theories that specify the natural numbers with more precision: see http://en.wikipedia.org/wiki/Robinson_Arithmetic).
Replies from: Johnicholas
↑ comment by Johnicholas · 2011-04-27T15:06:56.008Z · LW(p) · GW(p)
Mathematicians routinely use "infinite" to mean "infinite in magnitude". For example, the concept "The natural numbers" is infinite in magnitude, but I have picked it out using only 19 ascii characters. From a computer science perspective, it is a finite concept - finite in information content, the number of bits necessary to point it out.
Each of the objects in the set of the Peano integers is finite. The set of Peano integers, considered as a whole, is infinite in magnitude, but finite in information content.
Mathematicians' routine speech sometimes sounds as if a generic real number is a small thing, something that you could pick up and move around. In fact, a generic real number (since it's an element of an uncountable set) is infinite in information content - they're huge, and impossible to encounter, much less pick up.
Lowenheim-Skolem allows you to transform proofs that, on a straightforward reading, claim to be manipulating generic elements of uncountable sets (picking up and moving around real numbers, for example), into proofs that claim to be manipulating elements of countable sets - that is, objects that are finite in information content.
In that transformation, you will probably introduce "objects" which are something like "the double-struck N", and those objects certainly still satisfy internal predicates like "InfiniteInMagnitude(the double-struck N)".
However, you're never forced to believe that mathematicians are routinely doing impossible things - you can always take a formalist stance, pointing out that mathematicians are actually manipulating symbols, which are small, finite-in-information-content things.
Replies from: komponisto, None, bogdanb
↑ comment by komponisto · 2011-04-27T18:24:30.042Z · LW(p) · GW(p)
However, you're never forced to believe that mathematicians are routinely doing impossible things - you can always take a formalist stance, pointing out that mathematicians are actually manipulating symbols, which are small, finite-in-information-content things.
So, given this, what exactly is your complaint? You started off criticizing Eliezer (and whomever else) for saying "The integers are countable, but the real number line is uncountable" - I suppose on the grounds that everything in the physical universe is countable, or something. (You weren't exactly clear.) But now you point out (correctly) that there is a perfectly good interpretation of this statement which in no way depends on there being an uncountable number of physical things anywhere, or otherwise violates your (not-exactly-well-defined) philosophy. So haven't you just defeated yourself?
Replies from: Johnicholas
↑ comment by Johnicholas · 2011-04-27T19:20:22.844Z · LW(p) · GW(p)
I have a knee-jerk response, railing against uncountable sets in general and the real numbers in particular; it's not pretty and I know how to control it better now.
Replies from: None
↑ comment by [deleted] · 2011-04-27T19:29:46.320Z · LW(p) · GW(p)
I'm fairly confident that for your purposes you could live with the computable numbers (that is: those numbers whose decimal expansion can be computed by a Turing machine), and as long as you didn't need anything stronger than integration amenable to quadrature, you'd be just fine.
There are people who take this route, but I can't think of any off the top of my head. Knuth once stated that he'd like to write a calculus book roughly following this path, but, well, he's got other things on his mind.
EDIT: I should point out also that the computable numbers are countable (by the usual Godel encoding of whatever machine is rattling off the digits for you), and that for all practical intents and purposes they're probably equivalent to whatever calculus-related mischief is in play at the moment.
Replies from: Johnicholas↑ comment by Johnicholas · 2011-04-27T19:38:00.448Z · LW(p) · GW(p)
There are some weirdnesses down that route - for example, it turns out that you can't distinguish zero from nonzero, so the step function is actually uncomputable.
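To make the weirdness concrete, here is a minimal Python sketch, assuming the standard representation of a computable real as a function from a requested precision n to a rational within 2^-n of the true value (the names here are illustrative only). Testing "apart from zero" is semi-decidable, but no finite amount of probing can ever certify that a number is zero:

```python
from fractions import Fraction

def third(n):
    # a computable real: returns a rational within 2**-n of 1/3
    return Fraction(1, 3)

def tiny(n):
    # approximates 0 -- but a consumer of approximations can't know that
    return Fraction(0)

def apart_from_zero(x, max_n=50):
    """Semi-decide x != 0: True if x is certified nonzero at some precision
    up to max_n.  If |x(n)| > 2**-n, the true value can't be 0."""
    for n in range(max_n):
        if abs(x(n)) > Fraction(1, 2 ** n):
            return True
    return None  # "don't know": maybe zero, maybe just very small

print(apart_from_zero(third))  # True, certified at n = 2
print(apart_from_zero(tiny))   # None: zero can never be confirmed
```

Computing a step function at zero would require deciding the sign of x - zero included - which is exactly what no such probing procedure can do.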
My contrarian claim is that everyone could live with the nameable numbers - that is, the numbers that can be pointed out using a finite number of books to describe them. People who really strongly care about the uncountability of the reals have a hard time coming up with a concrete example of what they'd miss.
Replies from: None, Sniffnoy↑ comment by [deleted] · 2011-04-27T19:43:40.700Z · LW(p) · GW(p)
My contrarian claim is that everyone could live with the nameable numbers
I don't understand. Those also seem to fall prey to
it turns out that you can't distinguish zero from nonzero, so the step function is actually uncomputable.
Also,
People who really strongly care about the uncountability of the reals have a hard time coming up with a concrete example of what they'd miss.
Lebesgue measure theory, Gal(C/R) = Z/2Z, and some pathological examples in the history of differential geometry without which the current definition of a manifold would have been much more difficult to ascertain.
Off the top of my head. There are certainly other things I would miss.
Replies from: Sniffnoy, Johnicholas↑ comment by Sniffnoy · 2011-04-28T21:50:43.068Z · LW(p) · GW(p)
Gal(C/R) = Z/2Z
I'm confused; this is true for any real closed field. What are you getting at with this?
Replies from: None↑ comment by [deleted] · 2011-04-28T22:05:25.866Z · LW(p) · GW(p)
A mistake. I was thinking of C as the so-called "generic complex numbers." You're right that if you replace C with the algebraic closure of whatever countable model's been dreamed up, then C = R[i] and that's it.
Admittedly I'm only conjecturing that Gal(C/K) will be different for some K countable, but I think there's good evidence in favor of it. After all, if K is the algebraic closure of Q, then Gal(C/K) is gigantic. It doesn't seem likely that one could "fix" the other "degrees of freedom" with only countably many irrationals.
↑ comment by Johnicholas · 2011-04-27T20:15:32.714Z · LW(p) · GW(p)
Those are theories, which are not generally lost if you switch the underlying definitions aptly - and they are sometimes improved (if the new definitions are better, or if the switch demonstrates an abstraction that was not previously known).
People can't pick out specific examples of numbers that are lost by switching to using nameable numbers, they can only gesture at broad classes of numbers, like "0.10147829..., choosing subsequent digits according to no specific rule". If you can describe a specific example (using Lebesgue measure theory if you like), then that description is a name for that number.
Replies from: cousin_it, None↑ comment by cousin_it · 2011-04-28T11:22:28.825Z · LW(p) · GW(p)
I'm not sure what a "nameable number" is. Whatever countable naming scheme you invent, I can "name" a number that's outside it by the usual diagonal trick: it differs from your first nameable number in the first digit, and so on. (Note this doesn't require choice; the procedure may be deterministic.) Switching from reals to nameable numbers seems to require adding more complexity than I'm comfortable with. Also, I enjoy having a notion of Martin-Löf random sequences and random reals, which doesn't play nice with nameability.
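For concreteness, a sketch of the diagonal trick, assuming the naming scheme is handed to us as an enumeration of digit-producing functions for numbers in [0, 1):

```python
def diagonal(enumeration):
    """Given enumeration(k) = the k-th named number, as a function from
    digit position to decimal digit, build a number differing from the
    n-th named number at the n-th digit."""
    def digit(n):
        d = enumeration(n)(n)
        return 5 if d != 5 else 6  # avoiding 0/9 dodges the 0.999... = 1.0 ambiguity
    return digit

# Example: diagonalize against 0.000..., 0.111..., 0.222..., ...
names = lambda k: (lambda n: k % 10)
anti = diagonal(names)
print([anti(n) for n in range(8)])  # [5, 5, 5, 5, 5, 6, 5, 5]
```

Of course, `diagonal` is itself a finite description - just one that lives a level above the naming scheme it was fed.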
Replies from: Sniffnoy, Johnicholas↑ comment by Sniffnoy · 2011-04-28T21:58:53.793Z · LW(p) · GW(p)
By "nameable number" he seems to just mean a definable number - in general an object is called "definable" if there is some first-order property that it and only it satisfies. (Obviously, this dependson just what the surrouding theory is. Sounds like he means "definable in ZFC".) The set of all definable objects is countable, for obvious reasons.
With this definition, your diagonal trick actually doesn't work (which is good, because otherwise we'd have a paradox): Definability isn't a notion expressible in the theory itself, only in the metatheory. Hence if you attempt to "define" something in terms of the set of all definable numbers, the result is only, uh, "metadefinable". (I gave myself a real headache once over the idea of the "first undefinable ordinal"; thanks to JoshuaZ for pointing out to me why this isn't a paradox.)
EDIT: I should point out, using definable numbers seems kind of awful, because they're defined (sorry, metadefined :P ) in terms of logic-stuff that depends on the surrounding theory. Computable numbers, though more restrictive, might behave a little better, I expect...
EDIT Apr 30: Oops! Obviously definability depends only on the ambient language, not the actual ambient theory... that makes it rather less awful than I suggested.
Replies from: abramdemski↑ comment by abramdemski · 2013-01-15T11:47:06.559Z · LW(p) · GW(p)
The set of all definable objects is countable, for obvious reasons.
With this definition, your diagonal trick actually doesn't work (which is good, because otherwise we'd have a paradox): Definability isn't a notion expressible in the theory itself, only in the metatheory. Hence if you attempt to "define" something in terms of the set of all definable numbers, the result is only, uh, "metadefinable".
We could similarly argue that the definable objects should be thought of as "meta-countable" rather than countable, right? The reals-implied-by-a-theory would always be uncountable-in-the-theory. (I'm tempted to imagine a world in which this ended the argument between constructivists and classicists, but realistically, one side or the other would end up feeling uneasy about such a compromise... more likely, both.)
Replies from: Sniffnoy↑ comment by Sniffnoy · 2013-01-16T04:44:34.722Z · LW(p) · GW(p)
I think you're confusing levels here. When I spoke of "the surrounding theory" above, I didn't mean the, uh, actual ambient theory. (Sorry about that -- I may have gotten a little mixed up myself.) And indeed, like I said, definability only depends on the language, not the theory. Well -- of course it still depends on the actual ambient theory. But working internal to that (which I was doing), it only depends on the language. And then one can talk about the metalanguage, staying internal to the same ambient theory, etc... (mind you, all this is assuming that the ambient theory is powerful enough to talk about this sort of thing).
So at no point was I intending to vary the actual ambient theory, like you seem to be talking about.
Warning: I don't quite understand just how logicians think of these things and so may be confused myself.
↑ comment by Johnicholas · 2011-04-28T12:43:10.094Z · LW(p) · GW(p)
You're correct to point out that I'm being too vague, and I'm making mistakes speaking as if nameable numbers constitute a set or a single alternative to the reals or the rationals.
However, I've been a consumer of theorems and proofs that casually use real numbers as if they're lightweight objects. There is considerable effort involved in parsing the underlying concepts out of the theorems and proofs and re-formalizing them using something reasonable (completions of the natural numbers under various operations like additive inverse, multiplicative inverse, square root, roots of polynomials in general, roots of differential equations in general). Those are all different sets of nameable numbers, and they're all countable.
I would prefer that mathematicians routinely perceived "the reals" as a peculiar construction and, instead of routinely throwing them in as the standard tool for modeling positions and distances when working on concepts in geometry or symmetry, thought about what properties they actually need to get the job done.
Replies from: Nisan, wedrifid, komponisto↑ comment by Nisan · 2011-04-28T15:25:00.142Z · LW(p) · GW(p)
I know of one mathematician who thinks the real numbers are a peculiar construction in the context of topology because of the pathological things you can do with them — continuous nowhere-differentiable curves, space-filling curves, and so on. That's why she studies motivic/A1 homotopy theory instead of classical homotopy theory; only polynomial functions are allowed.
↑ comment by wedrifid · 2011-04-28T13:24:57.177Z · LW(p) · GW(p)
I would prefer that mathematicians routinely perceived "the reals" as a peculiar construction, and instead of throwing it in routinely when working on concepts in geometry or symmetry as the standard tool to modeling positions and distances, thought about what properties they actually need to get the job they're doing done.
Why is it that mathematicians so love the idea of doing their work blindfolded and with their hands tied behind their backs? Someone invented the reals. They're awesome things. And people invented all sorts of techniques you can use the reals for. Make the most of it! Leave proving when the reals are useful, how such a peculiar construction can be derived, and angsting about how deep and convoluted their basis must be, to the specialists in that sort of angsting.
Replies from: Johnicholas↑ comment by Johnicholas · 2011-04-28T14:15:32.050Z · LW(p) · GW(p)
It's the same as programmers insisting on introducing abstractions decoupling their code from the framework and libraries that they're using; modularity to prevent dependency hell.
Replies from: wedrifid↑ comment by wedrifid · 2011-04-28T14:22:28.065Z · LW(p) · GW(p)
It's the same as programmers insisting on introducing abstractions decoupling their code from the framework and libraries that they're using; modularity to prevent dependency hell.
I think it is the same as programmers choosing to use languages with built in support for floating point calculations and importing standard math and stats libraries as appropriate. This is an alternative to rolling your own math functions to model your calculations based off integers or bytes.
Replies from: Johnicholas↑ comment by Johnicholas · 2011-04-28T15:23:15.962Z · LW(p) · GW(p)
Are you suggesting that modeling real numbers with floating point is a good practice?
Yes, it is a standard practice, and it may be the best compromise available for a programmer or a team on a limited time budget, but the enormous differences between real numbers and floating-point numbers mean that everything that was true upstream in math-land regarding real numbers becomes suspect and has to be re-checked, or transformed into corresponding not-quite-the-same theorems and proofs.
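A few concrete instances of that suspicion, assuming IEEE 754 doubles (as in stock Python):

```python
# Identities that are theorems about the reals but fail for doubles:
print(0.1 + 0.2 == 0.3)            # False: none of the three is exactly representable
print((1e16 + 1.0) - 1e16)         # 0.0, not 1.0: the 1.0 is absorbed by rounding
a, b, c = 1e300, -1e300, 1.0
print((a + b) + c == a + (b + c))  # False: addition is not even associative
```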
If we (downstream consumers of mathematics) could get mathematicians to use realistic primitives, then we could put a lot of numerical analysts out of work (free up their labor to do something more valuable).
Replies from: cousin_it, wedrifid↑ comment by cousin_it · 2011-04-28T18:14:55.584Z · LW(p) · GW(p)
Do you think some constructivist representation of numbers can do better than IEEE floats at removing the need for numerical analysis in most engineering tasks, while still staying fast enough? I'm veeeeery skeptical. It would be a huge breakthrough if it were true.
Replies from: Johnicholas↑ comment by Johnicholas · 2011-04-28T20:16:17.729Z · LW(p) · GW(p)
Yes, that's my position. In fact, if you had hardware support for an intuitionistic / constructivist representation (intervals, perhaps), my bet would be that the circuits would be simpler than the floating-point hardware implementing the IEEE standard now.
Replies from: cousin_it↑ comment by cousin_it · 2011-04-28T20:22:59.377Z · LW(p) · GW(p)
I'm not an expert in the field, but it seems to me that intervals require strictly more complexity than IEEE floats (because you still need to do floating-point arithmetic on the endpoints) and will be unusable in many practical problems because they will get too wide. At least that's the impression I got from reading a lot of Kahan. Or do you have some more clever scheme in mind?
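The widening is easy to exhibit even with exact rational endpoints, so it isn't a rounding artifact. A toy sketch, with `Interval` a hypothetical stand-in for any such representation:

```python
from fractions import Fraction

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = Fraction(lo), Fraction(hi)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        # worst-case endpoints: the result must contain x - y for all
        # x in self and y in other -- even when "other" is self
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def width(self):
        return self.hi - self.lo

x = Interval(Fraction(333, 1000), Fraction(334, 1000))  # 1/3, give or take
print(float((x - x).width()))  # 0.002: x - x "should" be exactly 0, width 0
```

Interval arithmetic forgets that the two operands are correlated (the "dependency problem"), so widths compound with every operation.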
Replies from: Johnicholas↑ comment by Johnicholas · 2011-04-29T10:31:08.608Z · LW(p) · GW(p)
Yes, if you have to embrace the same messy compromises, then I am mistaken. My belief (which is founded on studying logic and mathematics in college, and then software development after college) is that better foundations, with effort, show through to better implementations.
↑ comment by wedrifid · 2011-04-29T03:31:52.375Z · LW(p) · GW(p)
Are you suggesting that modeling real numbers with floating point is a good practice?
Good, certainly. Not a universally optimal practice though. There are times when unlimited precision is preferable, despite the higher computational overhead. There are libraries for that too.
↑ comment by komponisto · 2011-04-28T21:59:46.005Z · LW(p) · GW(p)
So you would prefer that, instead of having one all-purpose number system that we can embed just about any kind of number we like into (not to mention do all kinds of other things with), we had a collection of distinct number systems, each used for some different ad-hoc purpose? How would this be an improvement?
You might consider the fact that, once upon a time, people actually started with the natural numbers -- and then, over the ages, gradually felt the need to expand the system of numbers further and further until they ended up with the standard objects of modern mathematics, such as the real number system.
This was not a historical accident. Each new kind of number corresponds to a new kind of operation people wanted to do, that they couldn't do with existing numbers. If you want to do subtraction, you need negative numbers; if you want to do division, you need rationals; and if you want to take limits of Cauchy sequences, then you need real numbers.
I don't understand why this should cause computer-programming types any anxiety. A real number is not some kind of mysterious magical entity; it is just the result of applying an operation, call it "lim", to an object called a "sequence" (a_n).
Real numbers are used because people want to be able to take limits (the usefulness of doing which was established decisively starting in the 17th century). So long as you allow the taking of limits, you are going to be working with the real numbers, or something equivalent. Yes, you could try to examine every particular limit that anyone has ever taken, and put them all into a special class (or several special classes), but that would be ad-hoc, ugly, and completely unnecessary.
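For instance - a sketch, not a formal construction - Newton's iteration for the square root of 2 is a Cauchy sequence every term of which is an exact rational, while the limit is not rational; "real number" is just the name for what such a sequence is heading toward:

```python
from fractions import Fraction

x = Fraction(1)
for _ in range(5):
    x = (x + 2 / x) / 2  # Newton's step for x**2 = 2, in exact rationals
    print(x, "   x*x - 2 =", float(x * x - 2))
    # the error roughly squares each step: 2.5e-1, 6.9e-3, 6.0e-6, 4.5e-12, ...
```

Each term is a finite-information object; it is quantifying over all such sequences at once that produces the uncountable set.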
Replies from: Sniffnoy↑ comment by Sniffnoy · 2011-04-28T22:11:46.190Z · LW(p) · GW(p)
I think you're being a bit uncharitable here. You've just moved the infinitude/"mysterious magicalness" from talking about real numbers to talking about sequences of rational numbers, and it is in fact possible to classify sequences as definable vs. undefinable, as well as computable vs. uncomputable. (Though definability personally seems a bit ad-hoc to me, seeing as it depends on the ambient theory.) I don't think it's really extraordinary to claim that an undefinable or uncomputable sequence is a bit mysterious and possibly somehow unreal.
EDIT Apr 30: Oops! Obviously definability depends only on the ambient language, not the actual ambient theory... that makes it rather less ad-hoc than I suggested.
EDIT: I should probably add, though, that this whole argument seems mostly pointless. Aside from where cousin_it and Johnicholas got to talking about how to represent numbers in computers - and that seems to have hardly anything to do with actual real numbers, seeing as those can't be represented in computers - it seems to be basically just Johnicholas saying "I don't like the reals for common constructivist reasons" and other people saying "Regardless, they're valid objects of study". Is there more to it than that?
Replies from: komponisto, Johnicholas↑ comment by komponisto · 2011-05-05T16:33:56.940Z · LW(p) · GW(p)
I think you're being a bit uncharitable here. You've just moved the infinitude/"mysterious magicalness" from talking about real numbers to talking about sequences of rational numbers
That was deliberate. (How was it uncharitable?)
I don't think it's really extraordinary to claim that an undefinable or uncomputable sequence is a bit mysterious and possibly somehow unreal.
It may not be extraordinary, but it's still a confusion. A confusion that was resolved a century ago, when set theory was axiomatized, and the formalist view emerged. The Cantor/Kronecker debate is over: Cantor was right, Kronecker was wrong.
The source of this confusion seems to be a belief that correspondences between mathematical structures and the physical world are properties of the mathematical structures in question, rather than properties of the physical world. This is a kind of map/territory confusion.
Replies from: Sniffnoy, Sniffnoy↑ comment by Johnicholas · 2011-04-29T10:36:48.028Z · LW(p) · GW(p)
No, there's nothing substantive beyond that.
My understanding is this thread was started, and to some extent kept rolling, by an unrelated thread, where I was being extremely hostile to EY, and several people went through all my back posts, looking for things to downvote. Patrick found something.
Replies from: Patrick↑ comment by Patrick · 2011-05-03T01:53:33.550Z · LW(p) · GW(p)
To put myself in the clear, I came across this old comment because I was looking through Doug S's old posts (because I was idly curious). I replied to your comment because I'm ridiculously pedantic, a virtue in mathematics. I haven't downvoted any of your comments and I harbor no feelings of antipathy towards you. Eliezer's a big boy and he can take care of himself.
Now, back to the math debate! I don't think it's legitimate to conflate countable sets with sets with finite information content. Here are two counterexamples. 1. The set of busy beaver numbers (a subset of the naturals). 2. The digits of Chaitin's Omega [make an ordered pair of (digit, position) to represent these] (see http://en.wikipedia.org/wiki/Chaitin%27s_constant). It's been proved that these sets can't be constructed with any algorithm.
Replies from: Johnicholas, nshepperd↑ comment by Johnicholas · 2011-05-03T17:16:53.530Z · LW(p) · GW(p)
"The set of busy beaver numbers" communicates, points to, refers to, picks out, a concept in natural language that we can both talk about. It has finite information content (a finite sequence of ascii characters suffices to communicate it). An analogous sentence in a sufficiently formal language would still have finite information content.
Note that a description, even if it is finite, is not necessarily in the form that you might desire. Transforming the description "the sequence of the first ten natural numbers" into the format "{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}" is easy, but the analogous transformation of "the first 10 busy beaver numbers" is very difficult if not impossible.
As nshepperd points out, an element of a countable set necessarily has finite information content (you can pick it out by providing an integer - "the fifth one" or whatever), while generic elements of uncountable sets cannot be picked out with finite information.
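The "provide an integer" move can be made fully explicit - a sketch using the Calkin-Wilf enumeration, which visits every positive rational exactly once:

```python
from fractions import Fraction
from math import floor

def rationals():
    # Calkin-Wilf sequence: q -> 1 / (2*floor(q) - q + 1)
    q = Fraction(1)
    while True:
        yield q
        q = 1 / (2 * floor(q) - q + 1)

gen = rationals()
print([str(next(gen)) for _ in range(8)])
# ['1', '1/2', '2', '1/3', '3/2', '2/3', '3', '1/4']
```

"The fifth positive rational" then picks out 3/2 with a handful of bits; no finite addressing scheme of this kind can cover the reals.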
↑ comment by nshepperd · 2011-05-03T02:19:29.074Z · LW(p) · GW(p)
I think the finite information content comes from being an element of a countable set. Like those of every other real number, the digits of Chaitin's constant themselves form a countable set (a sequence), while that set is a member of the uncountable R. Similarly, the busy beaver set is a subset of N, and drawn from the uncountable set 2^N.
Countable sets are useful (or rather, uncountable ones are inconvenient) because you can set up a normalized probability distribution over their contents. But... the set {Chaitin's Constant} is countable (it has one element) but I still can't get Omega's digits. So there still seems to be a bit of mystery here.
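For instance, a geometric weighting normalizes fine over a countably infinite set:

```latex
P(n) = 2^{-(n+1)} \quad (n = 0, 1, 2, \dots), \qquad \sum_{n=0}^{\infty} 2^{-(n+1)} = 1 .
```

By contrast, a probability measure can have at most countably many point masses, so over an uncountable set almost every individual element must receive probability exactly zero.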
↑ comment by [deleted] · 2011-04-27T20:52:06.214Z · LW(p) · GW(p)
Those are theories, which are not generally lost...
I really wish I had the time to explicitly write out the reasons why I believe these examples are compelling reasons to use the usual model of the real numbers. I tried, but I've already spent too long and I doubt they would convince you anyway.
People can't pick out specific examples of numbers that are lost by switching to using nameable numbers,
So? Omega could obliterate 99% of the particles in the known universe, and I wouldn't be able to name a particular one. If it turns out in the future that these nameable numbers have nice theoretic properties, sure. The effort to rebuild the usual theory doesn't seem to be worth the benefit of getting rid of uncountability. (Or more precisely, one source of uncountability.)
I think I've spent enough time procrastinating on this topic. I don't see it going anywhere productive.
Replies from: Johnicholas↑ comment by Johnicholas · 2011-04-28T10:37:22.461Z · LW(p) · GW(p)
Suppose someone played idly manipulating small objects, like bottlecaps, in the real world. Suppose they formed (inductively) some hypotheses, picked some axioms and a formal system, derived the observed truths as a consequence of the axioms, and went on to derive some predictions regarding particular long or unusual manipulations of bottlecaps.
If the proofs are correct, the conclusions are true regardless of the outcome of experiments. If you believe that mathematics is self-hosting - interesting and relevant and valuable in itself - that may be sufficient for you. However, you might alternatively take the position that contradictions with experiment would render the previous axioms, theorems and proofs less interesting because they are less relevant.
Generic real numbers, because of their infinite information content, are not good models of physical things (positions, distances, velocities, energies) that a casual consumer of mathematics might think they're natural models of. If you built the real numbers from first-order ZFC axioms, then they do have (via Löwenheim-Skolem) some finite-information-content correspondences - however, those objects look like abstract syntax trees, ramified with details that act as obstacles to finding an analogous structure in the real world.
↑ comment by Sniffnoy · 2011-04-28T21:51:52.322Z · LW(p) · GW(p)
Of course, whether a number is definable or not depends on the surrounding theory. Stick to the first-order theory of the reals and only algebraic numbers will be definable! Definable in ZF? Or what?
EDIT Apr 30: Oops! Obviously definability depends only on the ambient language, not the actual ambient theory... no difference here between ZF and ZFC...
↑ comment by [deleted] · 2011-04-27T15:20:15.512Z · LW(p) · GW(p)
Löwenheim-Skolem only applies to first-order theories. While there are models of the theory of real closed fields that are countable, referring to those models as "the real numbers" is somewhat misleading, because there isn't only one of them (up to model-theoretic isomorphism).
Also, if you're going to measure information content, you really need to fix a formal language first, or else "the number of bits needed to express X" is ill-defined.
Basically, learn model theory before trying to wield it.
Replies from: Nebu↑ comment by Nebu · 2016-02-01T04:18:06.262Z · LW(p) · GW(p)
Also, if you're going to measure information content, you really need to fix a formal language first, or else "the number of bits needed to express X" is ill-defined.
Basically, learn model theory before trying to wield it.
I don't know model theory, but isn't the crucial detail here whether the number of bits needed to express X is finite or infinite? If so, then it seems we can handwave the specific formal language we're using to describe X, in the same way that we can handwave the specific encoding of Turing Machines when talking about Kolmogorov complexity - even though actually getting a concrete integer K(S) for the Kolmogorov complexity of a string S requires us to use a fixed encoding of Turing Machines. In practice, we never actually care what the number K(S) is.
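For reference, the invariance theorem that licenses this handwave: for any two universal machines U and V there is a constant c_{UV}, independent of S, such that

```latex
\lvert K_U(S) - K_V(S) \rvert \le c_{UV} \quad \text{for all strings } S ,
```

so switching encodings shifts complexities by at most an additive constant, which is why the concrete value of K(S) rarely matters.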
Replies from: None↑ comment by [deleted] · 2016-02-02T02:43:08.105Z · LW(p) · GW(p)
Let's say I have a specific model of the real numbers in mind, and let's pretend "number of bits needed to describe X" means "log2 the length of the shortest theory that proves the existence of X."
Fix a language L1 whose constants are the rational numbers and which otherwise is the language of linear orders. Then it takes a countably infinite set of propositions to prove the existence of any given irrational number (i.e., exists x[1] such that x[1] < u[1], ..., exists y[1] such that y[1] > v[1], ..., x[1] = y[1], ..., x[1] = x[2], ..., where the sequences u[n] and v[n] are strict upper and lower bounding sequences on the real number in question).
Now fix a language L2 whose constants are the real numbers. It now requires one proposition to prove the existence of any given irrational number (i.e., exists x such that x = c).
The difference between this ill-defined measure of information and Kolmogorov complexity is that Turing Machines are inherently countable, and the languages and models of model theory need not be.
(Disclaimer: paper-machine2011 knew far more about mathematical logic than paper-machine2016 does.)
Replies from: Jiro↑ comment by Jiro · 2016-02-04T18:29:27.788Z · LW(p) · GW(p)
lets pretend "number of bits needed to describe X" means "log2 the length of the shortest theory that proves the existence of X."
Whether a theory proves the existence of X may be an undecidable question.
Replies from: gjm↑ comment by gjm · 2016-02-04T23:24:14.015Z · LW(p) · GW(p)
How many bits it takes to describe X is an undecidable question when defined in other ways, too.
Replies from: Jiro↑ comment by bogdanb · 2011-05-13T15:11:52.139Z · LW(p) · GW(p)
Sorry for what might be a silly question, but what do you mean by “generic real number”? In the sense of “one number picked at random from the set”, a “generic natural number” would also be huge and impossible to encounter—almost all natural numbers would need more bits than there are Planck volumes in the universe to represent—and it doesn’t seem that you’re trying to say that.
Replies from: Johnicholas↑ comment by Johnicholas · 2011-05-16T10:58:26.646Z · LW(p) · GW(p)
If you start selecting things at random, then you need a probability distribution. Many routinely used probability distributions over the natural numbers give you a nontrivial chance of being able to fit the number on your computer.
There are, of course, corresponding probability distributions over the reals (take a probability distribution over the natural numbers and give zero probability to anything else). However, the routinely used probability distributions on the reals give zero probability to the output being a natural number, a rational number, describable with a finite algebraic equation, or in fact, being able to fit the number on your computer.
One of the problems with real numbers is that someone trying to do Bayesian analysis of a sensor that reads 2.000..., or 3.14159..., using one of these real-number distributions as their prior, cannot conclude that the quantity measured is probably 2 or pi, no matter how many digits of precision the sensor goes out to.
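A minimal sketch of the contrast (all numbers hypothetical): put half the prior on "the quantity is exactly 2" and spread the other half uniformly over [1.5, 2.5], then feed in k sensor digits consistent with 2.000...:

```python
from fractions import Fraction

def posterior_exactly_two(k):
    # likelihood of the k-digit reading "2.000...0" is 1 under the atom,
    # and 10**-k (the consistent interval's width / the prior width 1)
    # under the uniform alternative; prior odds are 1:1
    return Fraction(1) / (1 + Fraction(1, 10 ** k))

for k in (1, 3, 6):
    print(k, float(posterior_exactly_two(k)))
# 1 0.909..., 3 0.999..., 6 0.999999...
```

Under a purely continuous prior the atom isn't there to win: the posterior probability of "exactly 2" stays zero for every k.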
Replies from: bogdanb↑ comment by bogdanb · 2011-05-28T22:59:03.458Z · LW(p) · GW(p)
I get that the sensor thing was only an example, but still: it doesn’t seem like a real objection. I mean, you’re not going to have (or need) a sensor with infinitely many decimal places of precision. (Or perhaps I’m not understanding you?)
In terms of “selecting things at random”, for any practical use I can think of you’ll be selecting things like intervals, not real numbers. I don’t quite see how that’s relevant to the formalism you use to reason about how and what you’re calculating.
I think there’s some big understanding gap here. Could you explain (or just point to some standard text), how does one reason about trivial things like areas of circles and volumes of spheres without using reals?
Replies from: Johnicholas↑ comment by Johnicholas · 2011-05-31T11:20:55.361Z · LW(p) · GW(p)
Perhaps you've confused "pi has a decimal expansion that goes on forever without apparent pattern" with "a generic real number has a decimal expansion that goes on forever without pattern"? Pi does have a finite representation, "Pi". We use it all the time, and it's just as precise as "2" or "0.5".
Specifically, you could start with the rationals and complete them by including all solutions to differential equations. Pi would be included, and many other numbers, but you'd still only have a countable set - because every number would have one or more shortest definitions - finite information content.
If you had a probability distribution over such a set, it would naturally favor hypotheses with short definitions. If it started out including pi as a possibility, and you gathered sufficient sensor data consistent with pi (a finite amount), the probability distribution would give pi as the best hypothesis. This is reasonable behavior. You have to do non-obvious mucking around with your prior to get that sort of reasonableness with standard real-number probabilities.
As others have pointed out, any specific countable system of numbers (such as the "solutions to differential equations" that I mentioned) is susceptible to diagonalization - but I see no reason to "complete" the process, as if being susceptible to diagonalization were a terrible flaw above all others. All the entities that you're actually manipulating (finite precision data and finite, stateable hypotheses like "exactly 2" and "exactly pi") are finite-information-content, and "completing" the reals against diagonalization makes essentially all the reals infinite-information-content - a cure in my mind far worse than the disease.
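A toy version of that "short definitions win" prior (the hypothesis names and weights here are invented purely for illustration): weight each nameable constant by 2^-(length of its name), then condition on consistency with the observed digits:

```python
import math

constants = {"2": 2.0, "e": math.e, "pi": math.pi, "22/7": 22 / 7}
prior = {name: 2.0 ** -len(name) for name in constants}
z = sum(prior.values())
prior = {name: p / z for name, p in prior.items()}

def posterior(observed_prefix):
    # crude likelihood: 1 if the constant's decimal expansion starts with
    # the observed digits, 0 otherwise
    ok = {name: p for name, p in prior.items()
          if str(constants[name]).startswith(observed_prefix)}
    z = sum(ok.values())
    return {name: p / z for name, p in ok.items()}

print(posterior("3.14"))     # pi and 22/7 both survive; pi's shorter name wins
print(posterior("3.14159"))  # {'pi': 1.0}: 22/7 = 3.142857... has dropped out
```

Finite sensor data settles on "pi" itself - a stateable hypothesis - rather than on an ever-narrowing interval.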
Replies from: bogdanb↑ comment by bogdanb · 2011-06-04T09:19:34.663Z · LW(p) · GW(p)
(Note: I’m not arguing in this particular post, just asking clarifying questions, as you seem to have the issues much clearer in your mind than I do.)
1) It seems one can start with the naturals, extend them to the integers, then to the rationals, then to whatever set results from including solutions to differential equations (does that have a standard name?). I imagine there are countably infinitely many constructions like that, am I right? They seem to “divide” the numbers “finer” (I’d welcome a hint to a more formal description of this), though they aren’t necessarily totally ordered in terms of how “fine” they are, and the limit of this process after an infinity of extensions seems to be the reals. (Am I missing something important up to here? In particular, we can reach the reals much faster; is there some important property the countable extensions have in general, other than their result set being countable and their individual structure?)
2) Do you have other objections to real numbers that do not involve probabilities, probability distributions, and similar information theory concepts?
3) I don’t quite grok your π example. It seems to me that a finite amount of sensor data will always only be able to tell you it’s consistent with all values in the interval π±ε; if you’re using a sufficiently “dense” set, even just the rationals, you’ll have an infinity of values in that interval, while using the reals you’ll have an uncountable one. In the countable case you’ll have to have probabilities for the countable infinity of consistent values, which could result in π being the most probable one, and in the uncountable one you’ll need a probability distribution function, which could just as well have π as the most probable. (In particular, I can’t see a reason why you couldn’t find a probability distribution function that has exactly the same value as your probability function when applied to the values in your π-containing countably-infinite set and is “well-behaved” in some sense on the reals between them; but I’m likely missing something here.)
I sort-of get that picking π in a countable set can be a finite-information operation and an infinite-information one in an uncountable set (though I’m not quite clear if or why that works on sets at least as “finely divided” as the rationals). But that seems to be a trick of picking the right countable set to contain the value you’re looking for:
If you started estimating π (let’s say, the ratio of circumference to diameter in a Euclidean universe) with, say, just the rationals, you may or may not get a “most likely” hypothesis, but it wouldn’t be π; you’ll only estimate that one if you happened to start with a set that contained it. And if you use a set that contains π, there would always be some kind of other number that fits in a “finer”-but-countable set you aren’t using that you might need to estimate (assuming there are a lot of such sets, as I speculate in point 1 above).
Of course, using the reals doesn’t save you from that: you still need an infinite amount of information to find an arbitrary real. But using probability distributions—even if you construct them by picking a probability function on a countable set and then extending it to the reals somehow—forces you to think about the parts outside that countable set (i.e., other even “finer” countable sets). In a way, this feels like reminding you of things you didn’t think of.
OK, what am I missing?
Replies from: Johnicholas↑ comment by Johnicholas · 2011-06-04T16:57:18.038Z · LW(p) · GW(p)
1) Yes, there are countably many constructions of various kinds of numbers. The construction can presumably be written down, and strings are finite-information-content entities. Yes, they're normally understood to form a set-theoretic lattice - the integers are a subset of the Gaussian integers, and the integers are a subset of the rationals, and both the Gaussian integers and the rationals are subsets of the complex plane.
However, the reals are not in any well-defined sense "the" limit of that lattice - you could create a contrived argument that they are, but you could also create an argument that the natural limit is something else, either stopping sooner, or continuing further to include infinities and infinitesimals or (salient to mathematicians) the complex plane.
Defenders of the reals as a natural concept will use the phrase "the complete ordered field", but when you examine the definition of "complete" they are referencing, it uses a significant amount of set theory (and an ugly Dedekind cuts construction) to include everything that it wants to include, and exclude many other things that might seem to be included.
2) Yes. I think the reals are a history-laden concept; they were built in fear of set-theoretic and calculus paradoxes, and without awareness of the computational approach - information theory and Gödel's incompleteness. They are useful primarily in the way that C++ is useful to C++ programmers - as a thicket or swamp of complexity that defends the residents from attack. Any mathematician doing useful work in a statistical, calculus-related, or topological field who casually uses the reals will need someone else, a numerical analyst or computer scientist, to laboriously go through their work and take out the reals, replacing them with a computable (and countable) alternative notion - different notions for different results. Often, this effort is neglected, and people use IEEE floats where the mathematician said "real", and get ridiculous results - or worse, dangerously reasonable results.
3) You're right that the finite amount of sensor data will only say it is consistent with this interval. As you point out, if there is an uncountable set of values within that interval, then it's entirely possible for there to be no single value that is a maximum of the probability distribution function. (That's an excellent example of some of the ridiculous circumlocutions that come from using uncountable sets, when you actually want the system to come up with one or a few best hypotheses, each of which is stateable.)
Pi is always a finite-information entity. Everything nameable is. It doesn't become infinite in information content just because you consider it as an element of the reals.
Yes - if you use a probability distribution over the rationals as your prior, and the actual value is irrational, then you can get bad results. I think this is a serious problem, and we should think hard about what Bayesian updating with misspecified models looks like (I know Cosma Shalizi has done some work on this), so that we have some idea what failure looks like. We should also think carefully about what we would consider to be a reasonable hypothesis, one that we might eventually come to rest on.
However, it's a false fork to argue "We wouldn't use the rationals therefore we should use the reals". As I've been trying to say, the reals are a particular, large, complicated, and deeply historical construction, and we should not expect to encounter them "out in the world".
Andrej Bauer has implemented actual real number arithmetic (not IEEE nonsense, or "computable reals", which are interesting, but not reals). Peano integers, in his (OCaml-based) language, RZ, would probably be five or ten lines. (Commutative groups are 13 lines). In contrast, building from commutative groups up to defining reals as sequences of nested intervals takes five pages; see the appendix: http://math.andrej.com/wp-content/uploads/2007/04/rzreals.pdf
Regarding "reminding you of things you didn't think of", I think Cosma Shalizi and Andrew Gelman have convincingly argued that Bayesian philosophy/methodology is flawed - we don't just pick a prior, collect data, do an update, and believe the results. If we were magical uncomputable beasties (Solomonoff induction), possibly that is what we would do. In the real world, there are other steps, including examining the data, including the residual errors, to see if it suggests hypotheses that weren't included in the original prior. http://www.stat.columbia.edu/~gelman/research/unpublished/philosophy.pdf
Replies from: bogdanb↑ comment by bogdanb · 2011-06-11T21:58:05.850Z · LW(p) · GW(p)
Hi John! Thank you very much for taking the time to answer at such length. The links you included were also very interesting, thanks.
I think I got a bit of insight into the original issue (way up in the comments, when I interjected in your chat with Patrick).
With respect to the points closer in this thread, it’s become more like teaching than an actual discussion. I’m much too little educated in the subject, so I could contribute mostly with questions (many inevitably naïve) rather than insights. I’ll stop here then; though I am interested, I’m not interested enough right now to educate myself, so I won’t impose on your time any longer.
(That is, not unless you want to. I can continue if for some reason you’d take pleasure in educating me further.)
Thank you again for sharing your thoughts :-)
comment by mathusalem · 2009-03-18T14:56:11.000Z · LW(p) · GW(p)
These posts are useful for calibrating commitment and self-incentive biases. Based on the probabilities espoused (80%, bad outcomes are 'exotic'), I say the impact is 1000x. The world looks pretty utopian from the A/C-cooled academic labs in the US in anno domini 2009.