Simulating Problems
post by Andreas_Giger · 2013-01-30T13:14:28.476Z · LW · GW · Legacy · 41 comments
Apologies for the rather mathematical nature of this post, but it seems to have some implications for topics relevant to LW. Prior to posting I looked for literature on this but was unable to find any; pointers would be appreciated.
In short, my question is: How can we prove that any simulation of a problem really simulates the problem?
I want to demonstrate that this is not as obvious as it may seem by using the example of Newcomb's Problem. The issue here is of course Omega's omniscience. If we construct a simulation with the rules (payoffs) of Newcomb, an Omega that is always right, and an interface for the agent to interact with the simulation, will that be enough?
Let's say we simulate Omega's prediction by a coin toss and repeat the simulation (without payoffs) until the coin toss matches the agent's decision. This seems to adhere to all specifications of Newcomb and is, if the coin toss is hidden, in fact indistinguishable from the real thing from the agent's perspective. However, if the agent knows how the simulation works, a CDT agent will one-box, whereas it is assumed that the same agent would two-box in 'real' Newcomb. Not telling the agent how the simulation works is never a solution, so this simulation appears not to actually simulate Newcomb.
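A minimal sketch of this coin-toss construction (the payoff numbers are the usual ones; the two trivial agents at the end are purely illustrative):

```python
import random

# Newcomb payoffs: the transparent box always holds $1,000; the opaque box
# holds $1,000,000 iff the "prediction" is one-boxing.
def payoff(decision, prediction):
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if decision == "one-box" else opaque + 1_000

def coin_toss_newcomb(agent):
    """Replace Omega by a fair coin; rerun (without payoffs) until the coin
    happens to match the agent's decision, and pay out only that run."""
    while True:
        prediction = random.choice(["one-box", "two-box"])  # the coin toss
        decision = agent()
        if decision == prediction:  # only the matching run counts
            return payoff(decision, prediction)

# Two trivial agents that know how the simulation works:
print(coin_toss_newcomb(lambda: "one-box"))   # always 1,000,000
print(coin_toss_newcomb(lambda: "two-box"))   # always 1,000
```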
Pointing out differences is of course far easier than proving that none exist. Suppose there is a problem for which we have no idea which decisions agents would make, and we want to build a real-world simulation to find out exactly that. How can we prove that this simulation really simulates the problem?
(Edit: Apparently it wasn't apparent that this is about problems in terms of game theory and decision theory. Newcomb, Prisoner's Dilemma, Iterated Prisoner's Dilemma, Monty Hall, Sleeping Beauty, Two Envelopes, that sort of stuff. Should be clear now.)
41 comments
Comments sorted by top scores.
comment by moridinamael · 2013-01-30T18:13:07.761Z · LW(p) · GW(p)
Perhaps I am answering a question other than the one you are asking, but: Every exercise in simulation is an exercise in evaluating which modeling concerns are relevant to the system in question, and then accounting for those factors up to a desired level of accuracy.
If you happen to be dealing with a system simple enough to be simulated exactly - and I don't know of any physical system for which this is possible - then it would be useful to talk about "proving" the correspondence between the simulation and the reality being modeled.
If you are dealing with a real system where you need to make approximations, my intuition says that the best you can do toward proving accuracy would be performing ample validations of the simulation against measured data and verifying that the simulation matches the data to within the expected tolerance.
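A sketch of what that validation step amounts to (the function and its arguments are placeholders of my own, not anything from a particular library):

```python
def validates(simulate, measurements, tolerance):
    """Accept the simulation only if it reproduces every measured data point
    to within the expected tolerance."""
    return all(abs(simulate(inputs) - observed) <= tolerance
               for inputs, observed in measurements)
```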
I suspect that you and I have different concepts of what a simulation is, because you describe an agent (presumably a human being) interacting with the "simulation" in real time. In this case you are mucking up the dynamics of the simulation by introducing a factor which is not accommodated by the model, i.e. the human. The human's reasoning is influenced by knowledge from outside the simulation.
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-30T19:00:44.044Z · LW(p) · GW(p)
I suspect that you and I have different concepts of what a simulation is, because you describe an agent (presumably a human being) interacting with the "simulation" in real time. In this case you are mucking up the dynamics of the simulation by introducing a factor which is not accommodated by the model, i.e. the human. The human's reasoning is influenced by knowledge from outside the simulation.
I didn't necessarily mean human agents. For example, this is a simulation of IPD with which non-human agents can interact. At each step, the agents make decisions based on the current state of the simulation. If you wanted, you could have exactly the same simulation with actual humans anonymously interacting via interface terminals with a server running the simulation. On the other hand, this is a non-simulation of the same problem, because it lacks actual agents that would interact with it. It's just a calculation, although an accurate one.
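Something in the spirit of this sketch (the payoff numbers and the two example strategies are just illustrations):

```python
# Standard one-shot PD payoffs: (my move, their move) -> my payoff
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_history, their_history):
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def iterated_pd(agent_a, agent_b, rounds=100):
    """Each round, both agents decide based on the state of the simulation so
    far (the histories); the simulation itself only supplies rules and payoffs."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = agent_a(hist_a, hist_b)
        move_b = agent_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(iterated_pd(tit_for_tat, always_defect))  # (99, 104) over 100 rounds
```

Swapping the two strategy functions for terminals driven by humans would give the "actual humans via interface terminals" version of the same simulation.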
In general, by "simulation" I mean a practical version of a problem that contains elements which would make it impossible or impractical to construct in real life, but is identical in terms of rules, interactions, results, and so on.
Perhaps I am answering a question other than the one you are asking, but: Every exercise in simulation is an exercise in evaluating which modeling concerns are relevant to the system in question, and then accounting for those factors up to a desired level of accuracy.
That is more or less the question I am asking, and evaluating which modeling concerns are relevant to the system in question is the crucial part. But how can we be certain to have made a correct analogy or simplification? It's easy to tell this is not the case if the end results differ, but if those are what we want to learn then we need a different approach.
Is it possible to simulate Omega, for example? Like the repeated coin toss mentioned above, except that we would need to prove that our simulation does in fact in all cases lead to the same decisions agents would make when facing an actual Omega. Or what if we need statistically significant results from a single agent in a one-shot problem, and we can't memory-wipe the agent? Etc.
comment by [deleted] · 2013-01-31T15:10:01.850Z · LW(p) · GW(p)
It is more likely a simulation simulates X if it fails like X fails than if it fails in a different way.
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-31T15:12:41.106Z · LW(p) · GW(p)
I'm not sure I understand what you mean by 'failing' with regard to simulations. Could you elaborate?
Replies from: None↑ comment by [deleted] · 2013-01-31T22:03:32.507Z · LW(p) · GW(p)
If a simulation of poker loses money in a way that is similar to a game of poker, it is a good simulation, because it will allow for more accurate worst-case budgeting.
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-31T22:11:15.964Z · LW(p) · GW(p)
You mean, if an agent loses money. And that's the point; if the only thing you know is that an agent loses money in a simulation of poker, how can you prove the same is true for real poker?
Replies from: None↑ comment by [deleted] · 2013-02-01T00:46:28.590Z · LW(p) · GW(p)
I think Karl Popper made the best case that there are no final proofs, only provisional ones, and that the way to find the more useful provisional proofs is to note how they fail and not how they succeed. A poker simulator that can tell me accurately how much I might lose is more helpful than one that tells me how much I might win. I can budget based on the former and not the latter.
If you want final proofs (models, theories, simulations) the answer is there are no scientific final proofs.
I could be wrong, or perhaps I have answered a question not asked.
comment by ESRogs · 2013-01-31T06:08:49.782Z · LW(p) · GW(p)
Let's say we simulate Omega's prediction by a coin toss and repeat the simulation (without payoffs) until the coin toss matches the agent's decision.
It's not quite clear to me what you have in mind here. Are you envisioning this with human agents or with programs? If with humans, how will they not remember that Omega got it wrong on the past run? If with programs, what's the purpose of the coin?
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-31T06:56:39.318Z · LW(p) · GW(p)
If you substitute Omega with a repeated coin toss, there is no Omega, and there is no concept of Omega being always right. Instead of repeating the problem, you can also run several instances of the simulation with several agents simultaneously and count only those instances in which the prediction matches the decision.
For this simulation, it is completely irrelevant whether the multiple agents are actually identical human beings, as long as their decision-making process is identical (and deterministic).
Replies from: ESRogs
comment by Qiaochu_Yuan · 2013-01-30T18:53:26.040Z · LW(p) · GW(p)
Can you taboo "problem"?
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-30T19:08:09.568Z · LW(p) · GW(p)
If anything, I expected to be asked to taboo 'simulation' — by 'problem' I really just mean game theoretical problems such as Newcomb, Prisoner's Dilemma, Iterated Prisoner's Dilemma, Monty Hall, Sleeping Beauty, Two Envelopes, and so forth.
Would tabooing 'problem' really be helpful?
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-01-30T19:13:21.789Z · LW(p) · GW(p)
It would for me! "Problem" is an extremely broad word. I would also like it if you tabooed "simulation."
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-30T19:47:13.093Z · LW(p) · GW(p)
In terms of game theory, 'problem' is not an extremely broad word at all, and I'm not aware of any grey areas, either. I guess you could define a game-theoretical problem as a ruleset within which agents get payoffs based on decisions they or others make. I really fail to see why you think this term that is prominently featured on LW should be tabooed.
I gave a definition for 'simulation' in another comment:
a practical version of a problem that contains elements which would make it impossible or impractical to construct in real life, but is identical in terms of rules, interactions, results, and so on
I'll taboo the term if others tell me to or upvote your comment, but at present I see no need for it.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-01-30T19:55:43.232Z · LW(p) · GW(p)
In terms of game theory, 'problem' is not an extremely broad word at all, and I'm not aware of any grey areas, either.
It was not obvious to me that you were talking about game-theoretic problems. "Problem" is not a word owned solely by game theorists.
a practical version of a problem that contains elements which would make it impossible or impractical to construct in real life, but is identical in terms of rules, interactions, results, and so on
It's unclear to me what you mean by this. If a problem contains elements which are impossible to construct in real life, in what sense can a practical version be said to be identical in terms of rules, interactions, results, and so on?
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-30T20:45:12.437Z · LW(p) · GW(p)
I have edited my top-level post to clarify what kind of problems I mean.
If a problem contains elements which are impossible to construct in real life, in what sense can a practical version be said to be identical in terms of rules, interactions, results, and so on?
For a trivial example, Omega predicting an otherwise irrelevant random factor such as a fair coin toss can be reduced to the random factor itself, thereby getting rid of Omega. Equivalence can easily be proven: regardless of whether we allow for backwards causality and whatnot, a fair coin is always fair, and even if we assume that Omega may be wrong, the probability of error must still be the same for either side of the coin. So in the end, Omega is exactly as random as the coin itself, no matter Omega's actual accuracy. Of course this wouldn't apply if the result of the coin toss were also relevant in some other way.
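A one-line check, writing $p$ for Omega's (symmetric) accuracy on the fair coin:

$$P(\text{Omega predicts heads}) = \tfrac12\,p + \tfrac12\,(1-p) = \tfrac12,$$

so the prediction is itself a fair coin, whatever the value of $p$.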
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-01-30T20:58:24.778Z · LW(p) · GW(p)
Okay, so right now I don't understand what your question is. It sounds to me like "how can we prove that simulations are simulations?" given what I understand to be your definition of a simulation.
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-30T21:38:23.218Z · LW(p) · GW(p)
The question is: How can I prove that all possible agents decide identically whether they're considering the simulation or the original problem?
To further illustrate the point of problem and simulation, suppose I have a tank and a bazooka and want to know whether the bazooka would make the tank blow up, but because tanks are somewhat expensive I build another, much cheaper tank lacking all parts I deem irrelevant such as tracks, crew, fire-control and so on. My model tank blows up. But how can I say with certainty that the original would blow up as well? After all, the tracks might have provided additional protection. Could I have used tracks of inferior quality for my model? Which cheaper material would have the same resistance to penetration?
Tank and bazooka are the problem, of which the tank is the impractical part that is replaced by the model tank in the simulation.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-01-30T21:43:11.029Z · LW(p) · GW(p)
But how can I say with certainty that the original would blow up as well?
You... can't?
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-30T21:53:03.784Z · LW(p) · GW(p)
This is obviously not about bazookas and tanks. If you want to know whether real tanks really blow up, you need real evidence. If you want to know whether CDT defects in PD, you don't. You can do maths just with logic and reason, and fortunately this is 100% about maths.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-01-30T23:20:11.451Z · LW(p) · GW(p)
You have not given me anything like a precise statement of a mathematical problem.
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-30T23:58:35.301Z · LW(p) · GW(p)
Here you go:
Given a problem A which is impossible or impractical in real life, find a practical problem B (called simulation) with the same payoff matrix for which it can be proven that any possible agent will make analogous decisions in analogous states.
Solve for Newcomb or other problems at will. Bonus points for finding generalized approach.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-01-31T00:01:00.258Z · LW(p) · GW(p)
That is not a precise statement of a mathematical problem. What do "impractical" and "practical" mean? What does "analogous" mean?
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-31T01:11:15.635Z · LW(p) · GW(p)
"Impractical" means that you don't want to or can't realize the problem in its original form, for example because it would be too expensive or because you don't have a prison handy and can't find any prisoner rental service.
"Practical" pretty much means the opposite, for example because it's inexpensive or because you happen to be a prison director and are not particularly bent on interpreting the law orthodoxly.
"Analogous" basically means that if you can find isomorphisms between the set of the states of problem A and the set of the states of problem B as well as between the set of decisions of problem A and the set of decisions of problem B, then each thus mapped pair of decisions or states is called analogous if analagous decisions lead to analogous states and analogous states imply analogous decisions.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-01-31T01:15:20.736Z · LW(p) · GW(p)
This doesn't sound like a mathematical problem, then. It's a modeling problem.
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-31T01:39:49.489Z · LW(p) · GW(p)
"It's not a dog, it's a poodle!"
Everything you claim to not understand or misunderstand or otherwise question is either trivial or has been answered several times already, and if A is the hypothesis that you generally don't understand a whole lot of maths, and B is the hypothesis that you are deliberately being impertinent, then from my perspective p(A∨B) is getting rather close to 1.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-01-31T01:48:47.184Z · LW(p) · GW(p)
I am a graduate student in mathematics, and I can point you to large quantities of evidence that hypothesis A is false. I recognize that the tone of my previous comments may have been unnecessarily antagonistic, and I apologize for that, but I genuinely don't understand what question you're asking. If you don't care enough to explain it to me, that's fine, but you should take it as at least weak Bayesian evidence that other people also won't understand what question you're asking.
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-31T02:31:16.397Z · LW(p) · GW(p)
It's not the antagonistic tone of your comments that puts me off, it's the way in which you seem to deliberately not understand things. For example my definition of analogous — what else could you possibly have expected in this context? No, don't answer that.
I genuinely don't understand what question you're asking
I believe I have said everything already, but I'll put it in a slightly different way:
Given a problem A, find an analogous problem B with the same payoff matrix for which it can be proven that any possible agent will make analogous decisions, or prove that such a problem B cannot exist.
For instance, how can we find a problem that is analogous to Newcomb, but without Omega? I have described such an analogous problem in my top-level post and demonstrated how CDT agents will, in the initial state, not make the analogous decision. What we're looking for is a problem in which any imaginable agent would make the analogous decision, and for which we can prove it. If we believe that such a problem cannot exist without Omega, how can we prove that?
The meaning of analogous should be very clear by now. Screw practical and impractical.
As a side note, I don't know what kind of stuff they teach at US grad schools, but what's of help here is familiarity with methods of proof and a mathematical mindset rather than mathematical knowledge, except some basic game theory and decision theory. As far as I know, what I'm trying to do here is uncharted territory.
Replies from: Qiaochu_Yuan, earthwormchuck163↑ comment by Qiaochu_Yuan · 2013-01-31T02:44:23.448Z · LW(p) · GW(p)
For example my definition of analogous — what else could you possibly have expected in this context? No, don't answer that.
The question is how close you wanted the analogy to be.
For instance, how can we find a problem that is analogous to Newcomb, but without Omega? I have described such an analogous problem in my top-level post and demonstrated how CDT agents will, in the initial state, not make the analogous decision. What we're looking for is a problem in which any imaginable agent would make the analogous decision, and for which we can prove it. If we believe that such a problem cannot exist without Omega, how can we prove that?
Okay, this is clearer.
As a side note, I don't know what kind of stuff they teach at US grad schools, but what's of help here is familiarity with methods of proof and a mathematical mindset rather than mathematical knowledge
I can point you to a large body of evidence that I have all of these things.
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-31T03:02:07.436Z · LW(p) · GW(p)
The question is how close you wanted the analogy to be.
Close enough that anything we can infer from the analogous problem must apply to the original problem as well, especially concerning the decisions agents make. I thought I said that a few times.
Okay, this is clearer.
Does that imply it is actually clear? Do you have an approach for this? A way to divide the problem into smaller chunks? An idea how to tackle the issue of "any possible agent"?
↑ comment by earthwormchuck163 · 2013-01-31T02:42:18.139Z · LW(p) · GW(p)
I'll give you a second data point to consider. I am a soon-to-be-graduated pure math undergraduate. I have no idea what you are asking, beyond very vague guesses. Nothing in your post or the preceding discussion is of a "rather mathematical nature", let alone a precise specification of a mathematical problem.
If you think that you are communicating clearly, then you are wrong. Try again.
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-31T03:17:55.859Z · LW(p) · GW(p)
Nothing in your post or the preceding discussion is of a "rather mathematical nature", let alone a precise specification of a mathematical problem.
Given a problem A, find an analogous problem B with the same payoff matrix for which it can be proven that any possible agent will make analogous decisions, or prove that such a problem B cannot exist.
You do realize that game theory is a branch of mathematics, as is decision theory? That we are trying to prove something here, not by empirical evidence, but by logic and reason alone? What do you think this is, social economics?
Replies from: earthwormchuck163, Qiaochu_Yuan↑ comment by earthwormchuck163 · 2013-01-31T03:37:07.634Z · LW(p) · GW(p)
Your question is not stated in anything like the standard terminology of game theory and decision theory. It's also not clear what you are asking on an informal level. What do you mean by "analogous"?
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-31T03:58:45.143Z · LW(p) · GW(p)
What do you mean by "analogous"?
I'm not surprised you don't understand what I'm asking when you don't read what I write.
Replies from: earthwormchuck163↑ comment by earthwormchuck163 · 2013-01-31T04:15:29.818Z · LW(p) · GW(p)
I did read that. It either doesn't say anything at all, or else it trivializes the problem when you unpack it.
Also, this is not worth my time. I'm out.
↑ comment by Qiaochu_Yuan · 2013-01-31T03:31:07.551Z · LW(p) · GW(p)
What you have stated is unclear enough that I can't recognize it as a problem in either game theory or decision theory, and meanwhile you are being very rude. Disincentivizing people who try to help you is not a good way to convince people to help you.
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-31T04:09:13.308Z · LW(p) · GW(p)
That's because it's not strictly speaking a problem in GT/DT, it's a problem (or meta-problem if you want to call it that) about GT/DT. It's not "which decision should agent X make", but "how can we prove that problems A and B are identical."
Concerning the matter of rudeness, suppose you write a post and however many comments about a mathematical issue, only for someone who doesn't even read what you write and says he has no idea what you're talking about to conclude that you're not talking about mathematics. I find that rude.
comment by Richard_Kennaway · 2013-01-30T14:59:11.365Z · LW(p) · GW(p)
The agent in Newcomb's problem needs to know that Omega's prediction is caused by the same factors as his actual decision. The agent does not need to know any more detail than that, but does need to know at least that much. If there were no such causal path between prediction and decision then Omega would be unable to make a reliable prediction. When there is correlation, there must, somewhere, be causation (though not necessarily in the same place as the correlation).
If the agent believes that Omega is just pretending to be able to make that prediction, but really tossed a coin, and intends only publicising the cases where the agent's decision happened to be the same, then the agent has no reason to one-box.
If the agent believes Omega's story, but Omega is really tossing a coin and engaging in selective reporting, then the agent's decision may be correct on the basis of his belief, but wrong relative to the truth. Such is life.
To simulate Newcomb's problem with a real agent, you have the problem of convincing the agent you can predict his decision, even though in fact you can't.
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-30T15:11:02.681Z · LW(p) · GW(p)
I only used Newcomb as an example to show that determining whether a simulation actually simulates a problem isn't trivial. The issue here is not finding particular simulations for Newcomb or other problems, but the general concept of correctly linking problems to simulations. As I said, it's a rather mathematical issue. Your last statement seems the most relevant one to me:
To simulate Newcomb's problem with a real agent, you have the problem of convincing the agent you can predict his decision, even though in fact you can't.
Can we generalize this to mean "if a problem can't exist in reality, an accurate simulation of it can't exist either" or something along those lines? Can we prove this?
Replies from: MrMind↑ comment by MrMind · 2013-02-01T13:06:38.598Z · LW(p) · GW(p)
Can we generalize this to mean "if a problem can't exist in reality, an accurate simulation of it can't exist either" or something along those lines? Can we prove this?
I would cast this sentence in the following form, seeing that if a problem contains something infinite, it is impossible for it to exist in reality: can an infinite transition system be simulated by a finite transition system? If there is even one that can be, this would disprove your conjecture. The converse, of course, is not true...
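For a toy instance of the first direction, here is an infinite transition system whose observable behaviour a finite one reproduces exactly (my own example, assuming only the observed output matters):

```python
from itertools import islice

def infinite_counter():
    """Infinite transition system: states are the naturals, n -> n+1,
    and we observe only n mod 3."""
    n = 0
    while True:
        yield n % 3
        n += 1

def finite_cycle():
    """Finite transition system: three states, k -> (k+1) mod 3,
    observing the state itself."""
    k = 0
    while True:
        yield k
        k = (k + 1) % 3

# The quotient map n -> n mod 3 is a functional bisimulation, so the two
# systems agree on all observable behaviour; here we just spot-check a prefix.
assert list(islice(infinite_counter(), 30)) == list(islice(finite_cycle(), 30))
```

Whether a given decision problem can be collapsed like this without changing any agent's decision is, of course, exactly the question of the post.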
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-02-01T14:27:24.675Z · LW(p) · GW(p)
I'm not sure what you mean by an infinite transition system. Are you referring to circular causality such as in Newcomb, or to an actually infinite number of states such as a variant of Sleeping Beauty in which on each day the coin is tossed anew and the experiment only ends once the coin lands heads?
Regardless, I think I have already disproven the conjecture I made above in another comment:
Omega predicting an otherwise irrelevant random factor such as a fair coin toss can be reduced to the random factor itself, thereby getting rid of Omega. Equivalence can easily be proven: regardless of whether we allow for backwards causality and whatnot, a fair coin is always fair, and even if we assume that Omega may be wrong, the probability of error must still be the same for either side of the coin. So in the end, Omega is exactly as random as the coin itself, no matter Omega's actual accuracy.