Posts

If it were morally correct to kill everyone on earth, would you do it? 2013-01-30T23:58:28.631Z

Comments

Comment by Bundle_Gerbe on Seemingly Popular Covid-19 Model is Obvious Nonsense · 2020-04-12T02:04:49.879Z · LW · GW

The model seems not far off in estimating the peak hospitalization date, at least for states that are currently peaking, like CA and NY. The peaks in places close to peaking can be estimated pretty accurately just with curve fitting, though; I assume that being fit to past data is why the model works OK for this.

It's clearly overly optimistic about the rate of drop-off after the peak in deaths, at least in some cases. Look at Spain and Italy. Right now here's how they look:

Italy: graph shows 610 deaths on April 9. Predicts 335 on April 10, 281 on April 11. Actual: 570 on April 10, 619 on April 11.

Spain: graph shows 683 on April 8. Predicts 372, 304, 262 on the next three days. Actual: 655, 634, 525.

The model for New York says deaths will be down to 48, 6% of the peak, in 15 days. Italy is 15 days from its peak of 919 and is only down to 619, 67% of the peak.

The model for the US as a whole is a little less obviously over-optimistic, assuming the peak really was April 10: it's only predicting a 40% decline in the next 15 days. The California model predicts an even slower decline. The model seems to assume that fast growth in cases during the outbreak phase leads to a fast recovery, which has not been borne out so far in Italy and Spain.
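
To spell out the arithmetic behind these comparisons (just restating the numbers quoted above):

```python
# Italy, 15 days after its peak of 919 daily deaths, reported 619.
italy_peak, italy_15d_later = 919, 619
print(f"Italy at {italy_15d_later / italy_peak:.0%} of peak")  # ~67%

# The model has New York at 48 daily deaths 15 days after peak, which it
# calls 6% of peak, implying a modeled peak of roughly 48 / 0.06 deaths/day.
print(f"Implied NY peak: ~{48 / 0.06:.0f} deaths/day")  # ~800
```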

Comment by Bundle_Gerbe on Taking Initial Viral Load Seriously · 2020-04-02T22:24:34.192Z · LW · GW
We have to ask why smallpox was a unique case, and why we never used this method for any other virus. Did we ever even consider it?

There are two strains of smallpox, one of which is much less deadly than the other. People practicing variolation tended to use variolous material from mild cases, including those of the successfully variolated. Some of the success of smallpox variolation was probably due to this practice and the resulting tendency for inoculations to contain variola minor.

Comment by Bundle_Gerbe on What are the most common and important trade-offs that decision makers face? · 2014-11-05T10:35:05.647Z · LW · GW

How about:

Specialization of Labor vs. Transaction/Communication Costs: a trade-off between having a task split among multiple people/organizations vs. done by a single person. Generalism vs. Specialization might be a more succinct way to put it.

Also, another pair with a close connection is 3 and 7. Exploration is a flexible strategy, since it leaves resources free to exploit better opportunities that turn up, while exploitation trades that flexibility for commitment.

Comment by Bundle_Gerbe on Open thread, Oct. 27 - Nov. 2, 2014 · 2014-10-28T07:36:22.863Z · LW · GW

As someone with a Ph.D. in math, I tend to think verbally inasmuch as I have words attached to the concepts I'm thinking about, but I never go so far as to internally vocalize the steps of the logic I'm following until I'm at the point of actually writing something down.

I think there is another, much stronger distinction in mathematical thinking: formal vs. informal. This isn't the same distinction as verbal vs. nonverbal. For instance, formal thinking can involve manipulating symbols and equations in addition to definitions and theorems, and I often do informal thinking by coming up with fairly explicit verbal stories for what a theorem or definition means (though pictures are helpful too).

I personally lean heavily towards informal thinking, and I'd say that trying to come up with a story or picture for what each theorem or definition means as you read will help you a lot. This can be very hard sometimes. If you open a book or paper and can't get anywhere when you try to do this with the first chapter, it's a good sign that you are reading something too difficult for your current understanding of that particular field. At a high level of mastery of a subject, you can turn informal thinking into proofs and theorems, but the first step is to be able to create stories and pictures out of the theorems, proofs, and definitions you are reading.

Comment by Bundle_Gerbe on Rationality Quotes December 2013 · 2013-12-04T02:47:36.479Z · LW · GW

Interestingly, advertisers, lawyers, and financial traders all have in common that they are agents who play zero-sum or nearly zero-sum games on someone's behalf. People who represent big interests in these games are compensated well because of the logic of the game: so much is at stake that you want the best person representing you, so these people's services are bid up. But there is still the feeling that the game is wasteful, though perhaps unavoidably so.

Also, problematically for the first sentence, I don't think many people would come up with the four professions named, especially "advertiser" and "salesperson", if asked to name the most important professions in the modern world. And some important professions, like "scientist", are widely valorized, while others, like "engineer", are at least not reviled.

Comment by Bundle_Gerbe on Rationality Quotes November 2013 · 2013-11-01T11:26:39.800Z · LW · GW

The theme of this book, then, must be the coming to consciousness of uncertain inference. The topic may be compared to, say, the history of visual perspective. Everyone can see in perspective, but it has been a difficult and long-drawn-out effort of humankind to become aware of the principles of perspective in order to take advantage of them and imitate nature. So it is with probability. Everyone can act so as to take a rough account of risk, but understanding the principles of probability and using them to improve performance is an immense task.

James Franklin, The Science of Conjecture: Evidence and Probability before Pascal

Comment by Bundle_Gerbe on Why officers vs. enlisted? · 2013-10-31T20:00:59.064Z · LW · GW

Well, just because the rule doesn't by itself prevent all possible cases of inappropriate cross-rank fraternization doesn't mean it has no value. There are other norms and practices that discourage generals from hanging out with lieutenants, e.g. generals usually get fancy lodging separate from the lieutenants. I suspect that cutting off lower-ranking officers from fraternizing with enlisted men prevents what would otherwise be one of the more common problematic cases.

If the military were even more concerned with this problem, it could have three or more groups instead of two: say, enlisted, officers, and super-officers. But there are also trade-offs to having more groupings, so the military sticks with two (part of this might be historically contingent; maybe three groups would work just as well, but everyone is copying the consensus choice of two).

Comment by Bundle_Gerbe on Why officers vs. enlisted? · 2013-10-31T19:11:16.947Z · LW · GW

I think that in the military, the "no fraternizing with enlisted personnel" rule might be one reason why a hard separation is useful. This kind of rule requires a cutoff and can't easily be replaced with a rule like "no fraternizing with people of a rank three or more below your own." For instance, how would you set up the housing arrangements? Also, promotions would be awkward under this system, since you would always have a group of people you previously could fraternize with but no longer can.

Comment by Bundle_Gerbe on What did governments get right? Gotta list them all! · 2013-09-20T01:39:18.784Z · LW · GW

I think the containment of the SARS epidemic in 2003 is an under-appreciated success story. SARS spread fairly easily and had a 9% mortality rate, so it could well have killed millions, but it was contained thanks to the WHO and to the quarantine efforts of various governments. There wasn't much coverage in the vein of "hooray! one of the worst catastrophes in human history has been averted!" afterwards.

Comment by Bundle_Gerbe on If it were morally correct to kill everyone on earth, would you do it? · 2013-01-31T22:45:53.610Z · LW · GW

"But the sacrifice is too great" is a relevant argument, you think that "Yeah doing Y is right" is potentially mistaken.

I think I disagree with this. On a social and political level, the tendency to rationalize is so pervasive that it would sound completely absurd to say "I agree that it would be morally correct to implement your policy, but I advocate not doing it, because it will only help future generations, screw those guys." In practice, when people try to motivate each other in the political sphere to do something, it is always accompanied by the claim that doing that thing is morally right. But it is in principle possible to try to get people not to do something by arguing "hey, this is really bad for us!" without arguing against its moral rightness. This thought experiment is a case where exactly this "let's grab the banana" position is supposed to be tempting.

Comment by Bundle_Gerbe on If it were morally correct to kill everyone on earth, would you do it? · 2013-01-31T10:18:05.724Z · LW · GW

Thanks for this response. One comment about one of your main points: I agree that the tradeoff of number of humans vs. length of life is ambiguous. But to the extent our utility function favors numbers of people over total life span, that makes the second scenario more plausible, whereas if total life span is more important, the first is more plausible.

I agree with you that both scenarios would be totally unacceptable to me personally, because of my limited altruism. I would badly want to stop either from happening, and I would oppose creating any AI that did it. But I disagree in that I can't say any such AI is unfriendly or "evil". Maybe if I were less egoistic, and had a better capacity to understand the consequences, I really would feel the sacrifice was worth it.

Comment by Bundle_Gerbe on If it were morally correct to kill everyone on earth, would you do it? · 2013-01-31T09:52:30.059Z · LW · GW

I don't think that's what I'm asking. Here's an analogy. A person X comes to the conclusion fairly late in life that the morally best thing they can think of to do is to kill themselves in a way that looks like an accident and will their sizable life insurance policy to charity. This conclusion isn't a reductio ad absurdum of X's moral philosophy, even if X doesn't like it. Regardless of this particular example, it could presumably be correct for a person to sacrifice themselves in a way that doesn't feel heroic, isn't socially accepted, and doesn't save the whole world but maybe only a few far-away people. I think most people in such a situation (who managed not to rationalize the dilemma away) would probably not do it.

So I'm trying to envision the same situation for humanity as a whole. Is there any situation that humanity could face that would make us collectively say "Yeah doing Y is right, even though it seems bad for us. But the sacrifice is too great, we aren't going to do it". That is, if there's room for space between "considered morality" and "desires" for an individual, is there room for space between them for a species?

Comment by Bundle_Gerbe on Causal Reference · 2012-10-23T21:58:04.029Z · LW · GW

Hmm, you are right. Thanks for the correction!

Comment by Bundle_Gerbe on Causal Reference · 2012-10-21T23:57:12.662Z · LW · GW

I think this example brings out how Pearlian causality differs from other causal theories. For instance, in a counterfactual theory of causation, since the negation of a mathematical truth is impossible, we can't meaningfully think of mathematical truths as causes.

But in Pearlian causality it seems that mathematical statements can have causal relations, since we can factor our uncertainty about them just as we can for other statements. I think endoself's comment argues this well. I would add that this is a good example of how causation can be subjective. Before 1984, the Taniyama-Shimura-Weil conjecture and Fermat's last theorem both existed as conjectures, and some mathematicians presumably knew about both, but as far as I know they had no clue that the two were related. Then Frey conjectured, and Ribet proved, that the TSW conjecture implies FLT. At that point mathematicians' uncertainty was such that they would have causal graphs with TSW causing FLT. Now we have a proof of TSW (mostly by Wiles), but any residual uncertainty is still correlated. In the future, maybe there will be many independent proofs of each, and whatever uncertainty is left about them will be (nearly) uncorrelated.
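
To make the correlation concrete, here is a toy calculation; the credences are made up, and only the structure matters:

```python
# After Ribet's result we know TSW implies FLT, so a credence in TSW
# pins down part of our credence in FLT (numbers invented for illustration).
p_tsw = 0.95                      # credence in Taniyama-Shimura-Weil
p_flt_given_not_tsw = 0.50        # FLT might still hold for some other reason
p_flt = p_tsw * 1.0 + (1 - p_tsw) * p_flt_given_not_tsw
print(p_flt)                      # 0.975

# Learning TSW was false would drop our credence in FLT from 0.975 to 0.50:
# the residual uncertainties are correlated, which is what licenses drawing
# the edge TSW -> FLT in a causal graph over mathematical claims.
```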

I also think there can be causal relations between mathematical statements and statements about the world. For instance, maybe there is some conjecture of fluid dynamics, which if true would cause us to believe a certain type of wave can occur in certain circumstances. We can make inferences both ways, for instance, if we observe the wave we might increase our credence in the conjecture, and if we prove the conjecture, we might believe the wave can be observed somehow. But it seems that the causal graph would have the conjecture causing the wave. Part of the graph would be:

[Proof of conjecture <- conjecture -> wave <- (fluid dynamics applies to water) ]

Comment by Bundle_Gerbe on The Fabric of Real Things · 2012-10-16T20:55:11.321Z · LW · GW

"Actually" isn't intended in any sense except emphasis and to express that Eliezer's view is contrary to my expectations (for instance, "I thought it was a worm, but it was actually a small snake").

Eliezer does seem to be endorsing the statement that "everything is made of causes and effects", but I am unsure of his exact position. The maximalist interpretation of this would be, "in the correct complete theory of everything, I expect that causation will be basic, one of the things to which other laws are reduced. It will not be the case that causation is explained in terms of laws that make no mention of causation". This view I strongly disagree with, not least because I generally think something has gone wrong with one's philosophy if it predicts something about fundamental physics (like Kant's a priori deduction that the universe is Euclidean).

I suspect this is not Eliezer's position, though I am unsure because of his "Timeless Physics" post, which I disagree with (as I lean towards four-dimensionalism) but which seems consonant with the above position in that both are consistent with time being non-fundamental. If he means something weaker, though, I don't know what it is.

Comment by Bundle_Gerbe on The Fabric of Real Things · 2012-10-16T20:03:11.735Z · LW · GW

Imagine a universe made only of ideal billiard balls eternally bouncing around a frictionless, pocketless billiard table; essentially the same as selylindi's idea of a gas in thermodynamic equilibrium. Imagine yourself observing this universe as a timeless observer, or, to aid the imagination, that its "time" dimension is mapped onto one of our space dimensions, so we see the system as an infinite frozen solid, 11 by 6 by infinity, with the balls represented by solid streaks inside it that run in straight lines except where they bounce off each other or the boundary of the solid.

Now, this system internally has a timelike dimension, but without increasing or decreasing entropy. And the physics of the system is completely reversible, so we have no basis for saying which way is the "future" and which way is the "past". We can equally well say a collision at one moment is "caused" by the positions of the balls one second in the "past" or one second in the "future". There is no basis for choosing a direction of causality between two events.

In our universe, time is microscopically reversible but macroscopically irreversible, because the universe is proceeding from a low-entropy state (we call that direction the "past") to a high-entropy state (the "future"). I am curious: can anyone coherently describe a universe with nothing like irreversible time but with a useful notion of causation? Or with something like irreversible time but no causation whatsoever? I have tried (for much more than five minutes!) and not succeeded, but I am still far from sure it's impossible. It might be too much to ask to imagine actually being in a universe without causation or time, but perhaps we can think of how such a universe could look from the outside.
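
A minimal sketch of the reversibility point, in a one-dimensional toy version rather than a full table: run the dynamics forward, negate every velocity, and run the same dynamics again; the system retraces its history (up to small integration error), so nothing in the laws picks out a preferred direction.

```python
def step(pos, vel, dt=0.01, length=10.0):
    """One Euler step for two equal-mass balls on a frictionless 1D table."""
    pos = [x + v * dt for x, v in zip(pos, vel)]
    for i in range(2):
        if pos[i] <= 0 or pos[i] >= length:      # elastic bounce off a wall
            vel[i] = -vel[i]
    # Elastic collision between equal masses swaps velocities, but only
    # when the balls are actually approaching each other.
    if abs(pos[0] - pos[1]) < 0.1 and (vel[0] - vel[1]) * (pos[0] - pos[1]) < 0:
        vel[0], vel[1] = vel[1], vel[0]
    return pos, vel

pos, vel = [2.0, 7.0], [1.0, -0.5]
start = list(pos)
for _ in range(2000):
    pos, vel = step(pos, vel)

vel = [-v for v in vel]                          # "reverse time"
for _ in range(2000):
    pos, vel = step(pos, vel)

print(start, [round(x, 2) for x in pos])         # back near the starting state
```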

Comment by Bundle_Gerbe on The Fabric of Real Things · 2012-10-12T21:32:44.987Z · LW · GW

I am confused by these posts. On one hand, Eliezer argues for an account of causality in terms of probabilities, which, as we know, are subjective degrees of belief. So we should be able to read off whether X thinks A causes B by looking at the conditional probabilities in X's map.

But on the other hand, he suggests (not completely sure this is his view from the article) that the universe is actually made of cause and effect. I would think that the former argument instead suggests causality is "subjectively objective". Just as with probability, causality is fundamentally an epistemic relation between me and the universe, despite the fact that there can be widespread agreement on whether A causes B. Of course, I can't avoid cancer by deciding "smoking doesn't cause cancer", just as I can't win the lottery by deciding that my probability of winning it is .9.

For instance, how would an omniscient agent decide whether A causes B according to Eliezer's account of Pearl? I don't think they would be able to, except maybe in cases where they could count frequencies as a substitute for using probabilities.
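
One way to see the difficulty: probabilistic dependence, the raw material for Pearl-style causal inference, vanishes when the map collapses to certainty. A toy illustration, with both distributions invented for the example:

```python
from math import log2

def mutual_information(joint):
    """I(A;B) for a joint credence table given as {(a, b): p}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# An uncertain agent's map: smoking (A) and cancer (B) are dependent.
uncertain = {(0, 0): 0.5, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.3}
# An omniscient agent's map: a point mass on the one actual world.
omniscient = {(1, 1): 1.0}

print(mutual_information(uncertain))   # ~0.26 bits: dependence to work with
print(mutual_information(omniscient))  # 0.0: nothing left to read off
```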

Comment by Bundle_Gerbe on The Useful Idea of Truth · 2012-10-04T13:45:26.801Z · LW · GW

Consider "Elaine is a post-utopian and the Earth is round" This statement is meaningless, at least in the case where the Earth is round, where it is equivalent to "Elaine is a post-utopian." Yet it does constrain my experience, because observing that the Earth is flat falsifies it. If something like this came to seem like a natural proposition to consider, I think it would be hard to notice it was (partly) meaningless, since I could still notice it being updated.

This seems to defeat many of the suggestions people have made so far. I guess we could say it's not a real counterexample, because the statement is still "partly meaningful". But in that case it would still be nice if we could say what "partly meaningful" means. I think it often happens that a concept or belief people throw around has a lot of useless conceptual baggage that doesn't track anything in the real world, yet doesn't completely fail to constrain reality (I'd put phlogiston, and possibly some literary criticism concepts, in this category).

My first attempt is to say that a belief A of X is meaningful to the extent that it (is contained in / has an analog in / is resolved by) the most parsimonious model of the universe which makes all predictions about direct observations that X would make.

Comment by Bundle_Gerbe on The Useful Idea of Truth · 2012-10-04T12:47:38.421Z · LW · GW

Your view reminds me of Quine's "web of belief" view as expressed in "Two Dogmas of Empiricism" section 6:

The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric which impinges on experience only along the edges. Or, to change the figure, total science is like a field of force whose boundary conditions are experience. A conflict with experience at the periphery occasions readjustments in the interior of the field. Truth values have to be redistributed over some of our statements. Reevaluation of some statements entails reevaluation of others, because of their logical interconnections--the logical laws being in turn simply certain further statements of the system, certain further elements of the field.

Quine doesn't use Bayesian epistemology, which is unfortunate, because I think it would have helped him clarify and refine his view.

One way to try to flesh this intuition out is to say that some beliefs are meaningful by virtue of being subject to revision by experience (i.e., they directly pay rent), while others are meaningful by virtue of being epistemically entangled with beliefs that pay rent (in the sense of not being independent of them in the probabilistic sense). But that seems to fail, because any belief connected to a belief that directly pays rent must itself be subject to revision by experience, at least to some extent: if A is entangled with B, an observation which revises P(A) typically revises P(B), however slightly.
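
A small numerical example of that last point (credences invented for illustration): evidence bearing directly on A also moves B whenever the two are entangled.

```python
# Joint credence table over (A, B); A and B are probabilistically entangled.
joint = {(True, True): 0.4, (True, False): 0.1,
         (False, True): 0.1, (False, False): 0.4}

def prob_b(joint):
    return sum(p for (a, b), p in joint.items() if b)

def update_on_evidence_for_a(joint, likelihood_ratio):
    """Bayesian update on evidence E with P(E|A)/P(E|not-A) = likelihood_ratio."""
    post = {k: p * (likelihood_ratio if k[0] else 1.0) for k, p in joint.items()}
    z = sum(post.values())
    return {k: p / z for k, p in post.items()}

print(prob_b(joint))                               # P(B) = 0.50 before
print(prob_b(update_on_evidence_for_a(joint, 4)))  # P(B) = 0.68 after
```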

Comment by Bundle_Gerbe on The Crackpot Offer · 2012-09-21T13:20:27.343Z · LW · GW

No, 2^aleph0 = aleph1 is the continuum hypothesis, which is independent of the standard axioms of set theory (ZFC) and can't be proven. I think maybe you mean he was close to showing 2^aleph0 is the cardinality of the reals, but I think he knew this already and was trying to use it as the basis of the proof.
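
In symbols, the distinction is:

```latex
% Cantor's diagonal argument establishes that the reals are strictly
% bigger than the naturals:
\[ |\mathbb{R}| = 2^{\aleph_0} > \aleph_0 . \]
% The continuum hypothesis is the further claim that no cardinality
% sits strictly in between; Goedel (1940) and Cohen (1963) showed it
% is independent of ZFC:
\[ 2^{\aleph_0} = \aleph_1 . \]
```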

Making mistakes like Eliezer's is a big part of learning math though, if we are looking for a silver lining. When you prove something you know is wrong, usually it's because of some misunderstanding or incomplete understanding, and not because of some trivial error. I think the diagonal argument seems like some stupid syntactical trick the first time you hear it, but the concept is far-reaching. Surely Eliezer came away with a bit better understanding of its implications after he straightened himself out.

Comment by Bundle_Gerbe on Game Theory As A Dark Art · 2012-07-25T20:25:30.961Z · LW · GW

The specific problem with calling the last game a "prisoner's dilemma" is that someone learning game theory from this article may well remember "there is a cool way to coordinate in the prisoner's dilemma using coin flips based on correlated equilibria" and then be seriously confused at some later point.

Comment by Bundle_Gerbe on Game Theory As A Dark Art · 2012-07-24T21:59:21.035Z · LW · GW

Against typical human opponents it is not rational to join dollar auctions, either as the second player or as the first, because of how humans are known to behave in this game.

The equilibrium strategy, however, is a mixed strategy, in which you pick the maximum bid you are willing to make at random from a certain distribution that puts different weights on different maximum bids. If you use the right formula, your opponents can do no better than mirroring you, and you will all have an expected payout of zero.
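
For the simultaneous-move (all-pay auction) idealization of the game, the symmetric equilibrium is the textbook one: with a $20 prize, bid uniformly at random on [0, 20]. A quick Monte Carlo check that this really gives each player an expected payoff of zero (a sketch of that standard result, not of the sequential dollar auction itself):

```python
import random

PRIZE = 20.0
TRIALS = 1_000_000

total = 0.0
for _ in range(TRIALS):
    my_bid = random.uniform(0, PRIZE)       # the equilibrium mixed strategy
    their_bid = random.uniform(0, PRIZE)    # opponent mirrors it
    # All-pay: both bids are forfeited; the higher bid wins the prize.
    total += (PRIZE if my_bid > their_bid else 0.0) - my_bid

print(total / TRIALS)   # ~0.0; against this mix, every bid in [0, 20] earns 0
```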

Comment by Bundle_Gerbe on Game Theory As A Dark Art · 2012-07-24T19:18:48.137Z · LW · GW

For the dollar auction, note that according to the Wikipedia page for All-pay auction, the auction does have a mixed-strategy Nash equilibrium in which the players' expected payoff is zero and the auctioneer's expected revenue is $20. So this breaks the pattern of the other examples, which show how Nash-equilibrium game theory can be exploited to devious ends when facing rational maximizers.

The dollar auction is an interesting counterpoint to games like the Traveler's Dilemma, where play by humans reaches outcomes much better than the Nash equilibrium. The dollar auction is an example where the outcomes are much worse when played by humans.

It seems humans in the Traveler's Dilemma and the Dollar Auction fail to reach the Nash equilibrium for an entirely different reason than in, say, the Prisoner's Dilemma or the Ultimatum Game (in which altruism/fairness are the main factors). In both cases, understanding the game requires iterative thinking, and there's a sort of "almost-equilibrium" different from the Nash equilibrium when this thinking isn't fully applied.

Comment by Bundle_Gerbe on Game Theory As A Dark Art · 2012-07-24T17:54:46.099Z · LW · GW

Calling the last game a "Prisoner's Dilemma" is a little misleading in this context, as the critical difference from the standard Prisoner's Dilemma (the fact that the payoff for (C,D) is the same as for (D,D)) is exactly what makes cousin_it's (and Nick's) solution work. A small incentive to defect when you know your opponent is defecting defeats a strategy based on committing to defect.
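
Concretely, here are the row player's payoffs in a standard PD next to a variant with the property described; the numbers are invented, and only the (C,D) = (D,D) pattern matters:

```python
# Row player's payoff for (my_move, their_move).
standard_pd = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
# The post's variant: cooperating against a defector pays the same as
# defecting against a defector, i.e. (C, D) equals (D, D).
posts_game = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 0}

# Gain from defecting when you know your opponent defects:
print(standard_pd[('D', 'D')] - standard_pd[('C', 'D')])  # 1: small incentive
print(posts_game[('D', 'D')] - posts_game[('C', 'D')])    # 0: no incentive
```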

Comment by Bundle_Gerbe on Evolutionary psychology as "the truth-killer" · 2012-07-23T22:33:22.205Z · LW · GW

I think evolutionary psychology is pretty far from the crux of the theism/atheism debate.

On one hand, I don't think evolutionary psych currently provides very strong evidence against god. It's true that if god doesn't exist, there is probably some evolutionary explanation for widespread cross-cultural religious belief, and if god does exist, there might not be. But evolutionary psychology so far has only educated guesses for why religious belief might be so common, without knock-down proof that any of them is the true reason. The existence of such guesses seems to me about equally likely under either hypothesis. The guesses are, however, a good argument against the opposite claim that widespread belief in religion proves god exists.

As for the Keller argument you mentioned, it begs the question by suggesting that the evolutionary psychology argument is a last-ditch effort to subvert a conclusion that would otherwise be nearly unavoidable for our god-believing brains. But if we do have some inherent tendency to give credence to the idea of god, it's not all that strong; for instance, the non-existence of god is much less surprising to our intuitions than the fact that color categories are perceptually constructed. The argument is better left at the object level of directly giving reasons for and against god, instead of arguing for and against the reliability of certain weak human intuitions.

Comment by Bundle_Gerbe on Welcome to Less Wrong! (2012) · 2012-07-23T02:11:57.469Z · LW · GW

It does not sound to me like you need more training in specific Christian arguments to stay sane. You have already figured things out despite being brought up in a situation that massively tilted the scales in favor of Christianity. I doubt there is any chance they could now convince you if they had to fight on a level field. After all, it's not as if they've been holding back their best arguments this whole time.

But you are going to be in a situation where they apply intense social pressure and reinforcement towards converting you. On top of that, I'm guessing maintaining your unbelief is very practically inconvenient right now, especially for your relationship with your dad. These conditions are hazardous to rationality, more than any argument they can give. You have to do what MixedNuts says. Just remember you will consider anything they say later, when you have room to think.

I do not think they will convert you. I doubt they will be able to brainwash you in a week when you are determined to resist. And even if they could, you managed to think your way out of Christian indoctrination once already; you can do it again.

If you want to learn more about rationality specific to the question of Christianity, given that you've already read a good amount of material here about rationality in general, you might gain the most from reading atheist sites, which tend to spend a lot of effort specifically on refuting Christianity. Learn more about the Bible from skeptical sources; if you haven't before, you'll be pretty amazed how much of what you've been told is blatantly false and how much about the Bible you don't know. (For instance, Genesis 1 and 2 have two quite contradictory creation stories, and the gospels' versions of the resurrection are impossible to reconcile. Also, the gospels of Matthew and Luke are largely copied from Mark, and the entire resurrection story is missing from the earliest versions of Mark.) I unfortunately don't know a source that gives a good introduction to Bible scholarship. Maybe someone else can suggest one?

Comment by Bundle_Gerbe on The Best Textbooks on Every Subject · 2012-07-20T16:13:00.024Z · LW · GW

For abstract algebra I recommend Dummit and Foote's Abstract Algebra over Lang's Algebra, Hungerford's Algebra, and Herstein's Topics in Algebra. Dummit and Foote is clearly written and covers a great deal of material while being accessible to someone studying the subject for the first time. It does a good job focusing on the most important topics for modern math, giving a pretty broad overview without going too deep on any one topic. It has many good exercises at varying difficulties.

Lang is not a bad book but is not introductory. It covers a huge amount but is hard to read and has difficult exercises. Someone new to algebra will learn faster, and with less frustration, from a less advanced book. Hungerford is awful: it is less clear, less modern, harder, and covers less material than Dummit and Foote. Herstein is OK but too old-fashioned and narrow, with too much focus on finite group theory. The part about Galois theory is good, though, as are the exercises.

Comment by Bundle_Gerbe on What are you counting? · 2012-07-18T16:59:29.447Z · LW · GW

In the strictest sense, "adding" sheep is a category error. Sheep are physical objects; you can put two sheep in a pen, or imagine putting two sheep in a pen, but you aren't "adding" them: that's for numbers. Arithmetic is merely a map that can fruitfully be used to model (among many other things) certain aspects of collecting sheep, separating sheep into groups, etc., under certain circumstances. When mathematical maps work especially well, they risk being confused with the territory, which is what I think is going on here. The "female sheep + male sheep" example should be thought of as an aspect of "putting sheep in pens for long periods of time" which addition does not model, not as an exception to "1+1=2".

Comment by Bundle_Gerbe on The Creating Bob the Jerk problem. Is it a real problem in decision theory? · 2012-06-15T02:28:42.275Z · LW · GW

We need a sense in which Bob is "just as likely to have existed" as I am; otherwise, it isn't a fair trade.

First, consider the case before Omega's machine is introduced. The information necessary to create Bob contains the information necessary to create me, since Bob is specified as a person who would create me specifically, and not anyone else who might also make him. Add to that all the additional information necessary to specify Bob as a person, and surely Bob is much less likely to have existed than I am, if this phrase can be given any meaning. This saves us from being obligated to tile the universe with hypothetical people who would have created us.

With Omega's machine, we also imagine Bob as having run across such a machine, so he doesn't have to contain the same information anymore. Still, Bob has the specific characteristic of having a somewhat unusual response to the "Bob the jerk" problem, which might make him less likely to have existed. So this case is less clear, but it still doesn't seem like a fair trade.

To give a specific sense to "just as likely to have existed," imagine Prometheus drew up two complete plans for people to create, one for Bob and one for you, then flipped a coin to decide which one to create, and it turned out to be you. Now that you exist, Prometheus lets you choose whether to also create Bob. Again, let's say Bob would have created you if and only if he thought you would create him. In this case we can eliminate Bob, since it's really the same if Prometheus just says "I flipped heads and created you. But if I had flipped tails, I would have created you if and only if I thought you would give me 100 bucks. So give me 100 bucks." (The only difference is that in the case with Bob, Prometheus creates you indirectly, by creating Bob, who then chooses to create you.) The only difference between this and Pascal's mugging is that your reward in the counterfactual case is existence. I can't think of any reason (other than a sense of obligation) to choose differently in this problem than you do in Pascal's mugging.

Finally, imagine the same situation with Prometheus, but let's say Bob isn't a real jerk, just really annoying and bad-smelling. He also finds you annoying and malodorous. You are worse off if he exists. But Prometheus tells you Bob would have created you if Prometheus had flipped tails. Do you create Bob? It's a sort of counterfactual prisoner's dilemma.

Comment by Bundle_Gerbe on Problematic Problems for TDT · 2012-05-30T21:27:02.025Z · LW · GW

To draw out the analogy to Godelian incompleteness: any computable decision theory is subject to the suggested attack of being given a "Godel problem" like problem 1, just as any computable set of axioms for arithmetic has a Godel sentence. You can always make a new decision theory TDT' that is TDT plus "do the right thing for the Godel problem", but TDT' has its own Godel problem, of course. You can't make a computable theory that says "do the right thing for all Godel problems"; if you tried, it would not give you something computable. I'm sure this is all just restating what you had in mind, but I think it's worth spelling out.
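
A toy sketch of the diagonalization; the names and payoffs are hypothetical, just to show the shape of the construction:

```python
def godel_problem(agent):
    """Omega pays out only if the agent's choice differs from Omega's exact
    simulation of the agent on this very problem. A deterministic,
    computable agent can never manage that."""
    prediction = agent("godel-problem")   # Omega's simulation of the agent
    actual = agent("godel-problem")       # the agent's actual choice
    return 1_000_000 if actual != prediction else 0

def tdt(problem):
    # Stand-in for any fixed computable decision theory.
    return "one-box"

def tdt_prime(problem):
    # "TDT plus do the right thing on TDT's Godel problem" -- but this just
    # defines a new computable agent, which has a Godel problem of its own.
    return "two-box" if problem == "godel-problem" else tdt(problem)

print(godel_problem(tdt))        # 0
print(godel_problem(tdt_prime))  # still 0: the patch moved the problem along
```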

If you have some sort of oracle for the halting problem (i.e., a hypercomputer) and Omega doesn't, he couldn't simulate you, so you would presumably be able to always win fair problems. Otherwise, the best you could hope for is to get the right answer whenever your computation halts, but fail to halt on some problems, such as your Godel problem. (A decision theory like this can still be given a Godel problem if Omega can solve the halting problem: "I simulated you, and if you fail to halt on this problem...") I wonder whether TDT fails to halt on its Godel problem, or whether some natural modification of it might have this property, but I don't understand it well enough to guess.

I am less optimistic about revising "fair" to exclude Godel problems. The analogy would be proving Peano arithmetic complete "except for things that are like Godel sentences," and I don't know of any formalization of the idea of "being a Godel sentence".