Asymptotic Decision Theory (Improved Writeup)
post by Diffractor · 2018-09-27T05:17:03.222Z · LW · GW · 14 comments
ADT (asymptotic decision theory, initially detailed in this paper) was a proposed decision theory with logical inductors, developed after EDT with logical inductors and exploration steps. It is a possible candidate for conceptual progress in decision theory, but some basic questions about its performance are still unsettled, and there are several issues with the current implementation of it.
Definitions:
A market is a computable function of type $S \to [0,1]$ (where $S$ is the set of sentences of math) that will be denoted as $\mathbb{P}$. We will be considering logical inductors in this setting, and in particular, each finite stage of a logical inductor is a market.
Let $\mathcal{A}$ be a finite set of actions. An agent $\pi$ is a computable sequence of computable functions of type $(S \to [0,1]) \to \Delta\mathcal{A}$. It takes a timestep and a probability distribution, and selects a distribution over actions. Because logical inductors associate each timestep with a market, specifying the logical inductor and the agent specifies a sequence of actions. Because we will be assuming some fixed logical inductor in the background, we will suppress it in the notation and refer to the action produced by an agent at time $t$ as $\pi_t$ or $\pi(t)$. Randomization can be handled by using the diagonalization sentences used to define exploration (this acts in such a way that no trader in the inductor is able to reliably predict what the agent does), or it can be handled by having one of the actions be to call a random number generator in the environment.
An embedder $E$ is a sequence of randomized functions from agents to $[0,1]$ (with the function at time $t$ being denoted by $E_t$). Let $E_t(\pi)$ denote the random variable that corresponds to the environment using a uniform random distribution of bitstrings for its randomness tape. An embedder must have the probability distribution of $E_t(\pi)$ being computably approximable on computable inputs.
An argmax agent is an agent that takes a finite set of other agents $\Pi$, and a single embedder $E$, and outputs the time-$t$ action of $\operatorname{argmax}_{\pi \in \Pi} \mathbb{E}_t[E_t(\pi)]$ on turn $t$. $\mathbb{E}_t$ is defined as it usually is for logical inductors. Notice that, although $E_t(\pi)$ may be very difficult to compute, putting it inside an expectation allows a logical inductor to output a decent guess as to what it is anyways. The agents all had to be computable in order for the argmax agent to duplicate their behavior at arbitrary turns. There is a dependence on the set $\Pi$, the embedder $E$, and the logical inductor, but because the logical inductor and $\Pi$ will be treated as fixed, we will write the time-$t$ action produced by the argmax agent as $A^E_t$.
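To make the moving parts concrete, here is a minimal Python sketch of these types and of the argmax agent, assuming the inductor's expectation operator is available as a black-box oracle over named quantities. All of the names here (`inductor_expectation`, the `"E(pi_i)"` naming scheme, and so on) are illustrative assumptions on my part, not notation from the paper.

```python
from typing import Callable, Dict, List

Action = str
Market = Callable[[str], float]                        # sentence -> price in [0, 1]
Agent = Callable[[int, Market], Dict[Action, float]]   # (timestep, market) -> distribution over actions
Embedder = Callable[[int, Agent, str], float]          # (timestep, agent, random bitstring) -> utility in [0, 1]

def argmax_agent(
    agents: List[Agent],
    embedder_name: str,                                  # a name the inductor can reason about, e.g. "E3"
    inductor_expectation: Callable[[int, str], float],   # oracle for the inductor's expectation of a named quantity
    t: int,
    market: Market,
) -> Dict[Action, float]:
    """Copy, at time t, the action of whichever agent the inductor currently
    expects to score highest inside the named embedder."""
    # The inductor's expectation of E_t(pi_i) stands in for the (possibly very
    # hard to compute) value of running agent i inside the embedder.
    scores = [inductor_expectation(t, f"{embedder_name}(pi_{i})") for i in range(len(agents))]
    best = max(range(len(agents)), key=lambda i: scores[i])
    return agents[best](t, market)
```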
$U$ will denote the "true environment", which is a sequence of values in $[0,1]$.
Finally, the logical inductor will be required to have "fast feedback". This means that after each turn, the following information will show up in the deductive process at time $t+1$.
1: (an interval containing) the true value of $U_t$.
The deductive process is the sequence of theorems that the logical inductor sees. The reason that the true utility has to be in an interval, instead of reported directly, is that the true utility may have arbitrarily many digits, which hampers the ability of traders to use that information to judge bets on how $U_t$ will turn out. This condition is present to ensure that, if $U_t$ is generated by taking $E_t(\pi)$ and feeding it a uniform random bitstring, $\mathbb{E}_t[U_t]$ and $\mathbb{E}_t[E_t(\pi)]$ converge to each other.
Theorem 1: If $U_t$ is generated on each turn by running $E_t(\pi)$ with a bitstring sampled uniformly at random, then $\lim_{t \to \infty}\left(\mathbb{E}_t[U_t] - \mathbb{E}_t[E_t(\pi)]\right) = 0$.
We will apply the basic trading strategy from Theorem 4.5.9 in the logical induction paper. In this case, if the left-hand side is overpriced by $\delta$ infinitely often (by a failure of convergence, and the same argument can be applied symmetrically to the right-hand side being overpriced infinitely often), the traders sell a fraction of a share of the overpriced expectation and buy a fraction of a share of the underpriced one, and keep doing it until 1 share has been moved in total. This takes care of having $\overline{\mathbb{P}}$-generable trade magnitudes. According to the law of large numbers, with probability 1, there is a finite time after which all of the sub-traders have their pile of shares be worth within $\frac{\delta}{2}$ of its expected value, so the initial traders in the efficiently emulatable sequence of traders can be clipped off, and all the other sub-traders get $\frac{\delta}{2}$ or more value from selling the high-priced expectation and buying the low-priced one, so the necessary conditions for the $\epsilon$-ROI Lemma are fulfilled and the resulting trader exploits the market.
Finally, we will define dominance. An agent $\pi$ dominates an agent $\pi'$ on an embedder $E$ if

$$\sum_{k \le t} \mathbb{E}_k\!\left[E_k(\pi') - E_k(\pi)\right] \in o(t)$$

This could be thought of as $\pi$ having sublinear regret relative to $\pi'$.
What's ADT?:
The asymptotic decision theory algorithm works as follows.
Inputs: A sequence of numbers asymptotically decreasing to 0, denoted as $\epsilon_t$, a finite list of embedders $\mathcal{E}$, a finite list of agents $\Pi$, and a logical inductor with fast feedback on the true environment $U$.
Ok, so what does this do intuitively? Well, it takes all the embedders, and runs them through a "reality filter" which checks whether the embedder, run on the agent itself, replicates what the true reality actually does, in expectation. For all the embedders that pass the reality filter, it uses the inductor to assess what score argmax gets in that embedder, takes the one that gets the highest score, and then implements argmax for that embedder. In short, it optimistically chooses the embedder where the highest score is attainable (that isn't wrong about reality), and optimizes for that. If it got a good score, that's fine. If it got a lower score than it was expecting, that embedder is less likely to pass the reality filter next time, because the true utility undershot what the embedder predicted. There isn't a problem (as there is in standard logical inductor decision theory) of systematically underscoring an embedder forever, because since an embedder is a randomized function, it's possible to actually approximate a distribution over what it outputs, so argmax will eventually catch on and start taking (predicted-to-be) optimal actions, and the value of these can also be empirically assessed.
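Since the formal statement of the algorithm did not survive into this writeup, here is a rough Python sketch of a single ADT timestep as just described, reusing the hypothetical `argmax_agent` and `inductor_expectation` interfaces from the earlier sketch. Treating the $\epsilon_t$ sequence as the tolerance of the reality filter is my assumption; the original may use it differently.

```python
def adt_step(
    embedder_names,         # finite list of embedder names the inductor can reason about
    agents,                 # finite list of agents (the set argmax chooses from)
    t, market,
    inductor_expectation,   # oracle for the inductor's expectation of a named quantity
    epsilon_t,              # assumed here to be the reality-filter tolerance at time t
):
    """One timestep of (this writeup's version of) ADT: reality-filter the
    embedders, optimistically pick the survivor whose argmax value looks best,
    then act as the argmax agent for that embedder."""
    # Reality filter: keep embedders whose expected output, when run on ADT itself,
    # matches the inductor's expectation of the true utility U_t.
    expected_true_utility = inductor_expectation(t, "U")
    plausible = [
        E for E in embedder_names
        if abs(inductor_expectation(t, f"{E}(ADT)") - expected_true_utility) < epsilon_t
    ]
    if not plausible:                 # degenerate case; the formal version handles this explicitly
        plausible = list(embedder_names)
    # Optimistic choice: the surviving embedder in which argmax expects the highest score.
    def argmax_score(E):
        return max(inductor_expectation(t, f"{E}(pi_{i})") for i in range(len(agents)))
    chosen = max(plausible, key=argmax_score)
    # Implement argmax for the chosen embedder.
    return argmax_agent(agents, chosen, inductor_expectation, t, market)
```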
Problems with ADT:
The most prominent undesirable feature of this is that it's restricted to a finite set of embedders. Optimistic choice fails very badly on an infinite set of embedders, because we can consider an infinite sequence of embedders that are like "pressing the button dispenses 1 utility forever", "pressing the button delivers an electric shock the first time, and then dispenses 1 utility forever"... "pressing the button delivers an electric shock for the first n times, and then dispenses 1 utility forever"... "pressing the button just always delivers an electric shock". Optimistic choice among embedders keeps pressing the button, because, although it keeps getting shocked, there's always an embedder that promises that the agent's fortunes will turn around on the next turn.
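A toy illustration of why optimism over this infinite family never stops pressing the button; this is a self-contained simulation with made-up numbers, not the ADT algorithm itself, and the horizon length is an arbitrary choice:

```python
# Embedder n claims: the first n presses shock you (0 utility), and every press
# after that pays 1 utility. The true environment is the "always shock" limit.
HORIZON = 50  # how far ahead the optimistic evaluation looks (an arbitrary choice)

def claimed_future_value(n, presses_so_far):
    """Total utility embedder n promises for pressing on each of the next HORIZON turns."""
    remaining_shocks = max(0, n - presses_so_far)
    return max(0, HORIZON - remaining_shocks)

presses = shocks = 0
for step in range(100):
    # Embedders with n <= presses have been falsified (their promised payouts never
    # arrived), but embedder n = presses + 1 still promises that fortunes turn
    # around on the very next press, so the optimistic value of pressing stays high.
    surviving = range(presses + 1, presses + 100)
    optimistic_value = max(claimed_future_value(n, presses) for n in surviving)
    if optimistic_value > 0:  # always true here, so the button gets pressed...
        presses += 1
        shocks += 1           # ...and the true (always-shock) environment shocks, every time

print(presses, shocks)  # 100 100: optimism over the infinite family never stops pressing
```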
Also, this optimizes for each choice individually, and does not naturally deal with sequences of choices, which is necessary to handle general environments.
Good Properties of ADT:
A nice feature of this is that exploration steps are not required for good behavior, which is important because counterfactuals are not conditionals.
Another nice feature of this is that it gets ASP (agent simulates predictor) right, which is a surprisingly hard decision theory problem to do well on. When you are rewarded with 1,000 dollars for picking action 2, and paid 1,000,000 dollars if the predictor predicts that you pick action 1, the best move is to just take action 1 to win the million dollars, but standard logical inductor decision theory converges to taking action 2, because the predictor isn't powerful enough to predict the rare exploration steps, so the agent will learn that action 2 always gets it more money than action 1, and dial up the probability of taking action 2 until it ends up getting a thousand dollars on each round and missing the larger payoff.
However, because the embedder that represents the true environment is plugging things like "always pick 1" and "always pick 2" into the environment, argmax on that environment will converge to copying the "always pick 1" strategy, so the logical inductor learns that argmax will always pick 1, and then the true embedder claims that a million dollars are attainable. If ADT learns to use the true embedder, then it will converge to always one-boxing, which is the desired behavior.
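For concreteness, here is a toy model of why the true ASP embedder ends up endorsing "always pick 1". The dollar amounts match the description above, but the way the predictor is modeled (it sees through constant policies and nothing else) is an assumption used purely for illustration.

```python
# ASP toy model: action 1 = forgo the $1,000; action 2 = take the $1,000.
# The predictor pays out $1,000,000 iff it predicts action 1. It can predict
# simple constant policies (the things argmax plugs into the embedder), but it
# cannot predict rare pseudorandom exploration steps.
def asp_embedder(constant_policy):
    predicted_action_1 = (constant_policy == 1)   # the predictor sees through a constant policy
    payout = (1_000_000 if predicted_action_1 else 0) + (1_000 if constant_policy == 2 else 0)
    return payout

print(asp_embedder(1))  # 1000000 -- so argmax on the true embedder learns to copy "always pick 1"
print(asp_embedder(2))  # 1000
```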
This same feature also allows it to win on Parfit's Hitchhiker and XOR Blackmail (I think, 90% probability).
This does not pay up on counterfactual mugging.
The original paper on Asymptotic Decision Theory had much more restrictive conditions for good behavior, such as the embedders having to be continuous, all the agents in $\Pi$ having to converge to a single distribution in the limit, all the embedders in $\mathcal{E}$ having to converge to a single payoff in the limit when fed a convergent agent, and having to use a continuous analogue of argmax. In combination, this meant that most of the games you could define (even "predict this sequence of bits") were outside of the class of problems where optimality is guaranteed.
Why Haven't I Heard of this Before?
Well, for quite a while, we thought it was bad because we thought it crashed into itself in games of chicken, so it got tabled. I'll now go over why that argument is false.
To begin, note that there's a crucial difference between the opponent in the "Spoofer" embedder and in the "Delusion" embedder. In the first case, the only embedder that the opponent is optimizing over is the one with "go-straight bot" (or "swerve bot", depending on what you substitute in). In the second case, the opponent is optimizing over the same three embedders that you are using. ADT with only the "vs. go-straight bot" embedder converges to swerving, and ADT with only the "vs. swerve bot" embedder converges to going straight. So, in the "Spoofer" embedder, the argmax agent will converge to thinking that "go straight" is the best thing to plug in, because it gets a straight-swerve outcome.
Assume for the sake of contradiction that the agent (with the 3 embedders listed) converges to going straight. Then on the true environment, it will keep getting straight-straight outcomes, while the "Spoofer" embedder keeps predicting that you'll get straight-swerve outcomes. So, the "Spoofer" embedder eventually fails the reality filter, so the agent will learn to not use it. Both of the remaining embedders advise swerving as getting the best outcome, so the agent converges to swerving, and we have a contradiction.
What went wrong in the original reasoning? As far as I can tell, it originated from accidentally treating "ADT with only one embedder" and "ADT with the three listed embedders" as the same, because "ADT" was used to denote both of them.
Evil Problems and Theorem Failure:
To make things more confusing, the second theorem in the ADT paper, about argmax dominating all agents in $\Pi$, is wrong, and not in a fixable way. There's an explicit counterexample.
It will be instructive to take a detour to talk about drnickbone's evil problem [LW · GW]. Defining a "fair" problem in decision theory is necessary, because you can't just say that a decision theory is the best on all problems. Consider the problem where you are up against an opponent who just really hates some particular decision theory, and penalizes anyone that uses it to select actions. Of course, the decision theory of your choice will fail on this problem. So, instead, we would hope that there's some notion of a "fair" problem, such that we can say that a decision theory attains equal or greater utility than all competitors on all fair problems.
An initial attempt at defining a fair problem was one where all agents that select the same action get the same result. The problem is fully action-determined, and your payoff doesn't depend at all on your ritual of cognition. This is the notion of "fair" used in the original paper.
The Evil problem is a decision theory problem that is completely action-determined, and fair by the old definition of "fair", where the decision theory of your choice (heretofore abbreviated as YDT, for "Your Decision Theory") gets systematically lower utility than most other competing decision theories. Consider a variant of Newcomb's problem where there are two boxes, and you may select one of them. Omega's decision process is to predict what box YDT would take, and then put 1000 dollars in the other box, and nothing in the box that YDT takes. Also, one of the boxes is lustrous and sparkly and would make a nice addition to your box collection, and you value that box at 10 dollars.
Now, you are like "well, I'm using YDT, and Omega is really accurate, and I left my random number generator at home, so no matter which box I pick, I'll get 0 dollars. Might as well go for the shiny box". And all the other decision theories can run through that reasoning and take the blank box, which contains 1000 dollars, and walk away substantially richer than before. Note that any agent that takes the same box gets the same payoff. So it is intuitively unfair, despite being completely action-determined.
This same sort of problem, when put into embedder form, leads to argmax systematically underperforming "take the blank box". The argmax agent will converge to taking the shiny box about half of the time. This is because, since it is possible to go back and compute what "blank-box" and "shiny-box" would have yielded on turns where you didn't take that box, you keep thinking that they'd get decent payoffs. In expectation, "blank-box" gets 502.5 dollars, and "shiny-box" gets 502.5 dollars, while "argmax" gets 5.025 dollars. This is an issue for all decision theories that operate by treating the environment as a function, and plugging actions into it. It will always consider the action it didn't take to have a higher utility, and this is actually true, because there are "objective counterfactuals" that can be checked. In particular, condition 1 in my probabilistic tiling post [AF · GW] was required specifically to rule out these sorts of shenanigans. Check the discussion there of why that condition can fail; it's talking about how these sorts of evil problems cause the expected utility of "I take an action" to systematically deviate from the expected utility of the action you actually take.
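As a sanity check on those figures, here is the back-of-the-envelope version of the payoff calculation, written as a function of the probability q with which the (unpredictable-to-the-inductor) argmax agent takes the shiny box. The $1000 and $10 values are from the problem statement; the exact equilibrium value of q, and hence the precise dollar figures, depends on modeling details, so treat the printed numbers as approximate.

```python
# Evil problem, embedder form: Omega predicts which box argmax takes and puts
# $1000 in the *other* box; the shiny box is worth an extra $10 to you.
def payoffs(q):
    """q = probability (unpredictable to the inductor) that argmax takes the shiny box."""
    blank_policy = 1000 * q              # blank box is full exactly when argmax was predicted to take shiny
    shiny_policy = 1000 * (1 - q) + 10   # shiny box is full exactly when argmax was predicted to take blank
    argmax_agent = 10 * q                # argmax always finds its chosen box empty; it only ever gets the $10 of shininess
    return blank_policy, shiny_policy, argmax_agent

# Argmax drifts toward whichever constant policy currently looks better, so it
# hovers near the q where the two constant policies look equally good:
for q in (0.5, 0.505):
    print(q, payoffs(q))
# 0.5   -> (500.0, 510.0, 5.0)
# 0.505 -> (505.0, 505.0, 5.05)  both constant policies look like roughly $500, argmax itself gets roughly $5
```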
To return to ADT: since it's treating the environment as a function that it can plug stuff into, it's vulnerable to this exploit.
So, what went wrong in the original proof that argmax dominates everything in $\Pi$? It was the (not explicitly listed) step that went from the agents in a maximal equivalence class all getting the same utility in the limit (which is true, because that is what it means for them to be in a maximal equivalence class) to argmax converging to act like one of them (which is a necessary step to get argmax to have the same property). However, just because the utilities are the same doesn't mean the probability distributions are the same!! In fact, given the continuous version of argmax in that paper, it's possible to come up with a much more mundane case where it fails that definitely isn't an evil problem. Consider the problem where you must make the same move as your opponent, and your probability distribution over moves is the same as your opponent's probability distribution over moves. Let $\pi_1$ and $\pi_2$ be the agents that just implement the two constant strategies. Both get a utility of 1, but, by the definition of continuous argmax that was given in the paper, argmax would converge to a 50/50 mix of the two strategies since they are of equal value. When this is substituted into both your and your opponent's moveslot, it produces an expected utility of only 1/2.
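The arithmetic behind that last counterexample, assuming the natural normalization of 1 utility for matching your opponent's move and 0 otherwise:

```python
MOVES = ["left", "right"]

def matching_game(my_dist, opp_dist):
    # Expected utility: 1 if the two independently sampled moves match, 0 otherwise.
    return sum(my_dist[m] * opp_dist[m] for m in MOVES)

pure_left   = {"left": 1.0, "right": 0.0}
pure_right  = {"left": 0.0, "right": 1.0}
fifty_fifty = {"left": 0.5, "right": 0.5}

# Each constant strategy, substituted into both move slots, gets utility 1:
print(matching_game(pure_left, pure_left), matching_game(pure_right, pure_right))   # 1.0 1.0
# The 50/50 mixture that continuous argmax converges to, substituted into both slots, gets only 1/2:
print(matching_game(fifty_fifty, fifty_fifty))                                      # 0.5
```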
What is Fairness?:
These evil problems are a problem for showing that argmax dominates everything in $\Pi$. While attempting a proof, I came up with a possible alternate definition of "fair", but it's more accurate to call it "regret-free". An embedder $E$ is "regret-free" iff, for every embedder $E'$, the argmax agent for $E$ dominates the argmax agent for $E'$ on $E$:

$$\forall E': \sum_{k \le t} \mathbb{E}_k\!\left[E_k(A^{E'}) - E_k(A^{E})\right] \in o(t)$$
Intuitively, argmax doesn't regret its decision, in the sense of not wishing it was optimizing for some specific other world to get a more fortunate sequence of actions. The agent doesn't wish it was deluded about the world in order to be decorrelated with whatever is punishing its action sequence. One effect of this definition is that it shows that (according to the expectation of the agent), argmax will do better than any particular strategy in the strategy set, because you can consider an environment that rewards that exact strategy being played on every round. Another notable effect is that it rules out the original Evil problem as "unfair", but the modified variant, where the action of the agent is substituted into both your action and Omega's prediction, is fair. So there's still a regret-free/"fair" environment that can model the Evil problem faithfully, but it says that the proper thing to judge against isn't environments where Omega penalizes YDT, but rather environments where Omega penalizes the decision theory of whoever is picking the boxes. And of course, YDT and everything else also gets either 0 or 10 dollars worth of value in this modified problem, and balance is restored.
Sadly, this particular definition of "fair" is inadequate for general use, because it is possible to construct environments where some arbitrary decision theory other than argmax systematically gets lower utility than argmax. This can be fixed by making this definition relative to the agent under consideration, but then you run into the problem of simple agents calling (intuitively fair) games "unfair relative to me". There will be another post about this.
Lack-of-Proof-of-Optimality for ADT:
We might try to go for a theorem like
Conjecture 1: If all environments in $\mathcal{E}$ are regret-free, then, on all embedders $E \in \mathcal{E}$ s.t. $\lim_{t \to \infty}\left(\mathbb{E}_t[E_t(\mathrm{ADT})] - \mathbb{E}_t[U_t]\right) = 0$ (that is, embedders which pass the reality filter in the limit), ADT dominates $A^E$.
In short, ADT seems like it will learn to do as well as argmax on all environments that don't get ruled out. This conjecture is still open! I thought I had a proof, and it failed; then I thought I had a disproof, and it failed; and then I thought I had another disproof, and it turned out that I couldn't show that one of the environments was regret-free. Maybe some extra conditions are required for this conjecture to be a theorem, but I suspect that the environment is actually regret-free and the conjecture is false, pointing to a genuine hole in ADT.
I'll describe the first and last of these failed attempts here, in the hopes that they will provide help on how to conclusively prove or disprove the conjecture.
Failed Attempt 1:
To begin with, if the true environment is in $\mathcal{E}$, and one other environment $E$ is "clearly in the lead" (i.e., its estimated argmax value exceeds that of every other surviving embedder by some fixed $\delta$), and this opportunity recurs infinitely often, it is possible to money-pump this. In particular, since the reality filter for the true environment converges to 0, the inductor's estimate of the true environment run on ADT tracks its estimate of $U_t$. The money pump works by buying one share in the (lower-priced) expected argmax value of the true environment and selling one share in the (higher-priced) expected argmax value of $E$, while also selling one share in the expected value of the true environment run on ADT and buying one share in the expected value of $E$ run on ADT. Because both of the latter have prices very similar to each other (because both $E$ and the true environment pass the reality-filter), their price cancels out, yielding about $\delta$ dollars upfront from the first buy-sell pair, along with the fact that the true environment is a regret-free environment, so the value it assigns to ADT is priced at or below the value it assigns to its own argmax agent in the limit. Now, because the embedder $E$ will be actually picked, all the purchased and sold shares have value that cancels out to approximately 0. (Well, actually, it's a bit more complicated than that: you need to add an additional condition that the information about what ADT copies shows up in the deductive process in a timely manner, because logical inductors judge the value of a trader by their worst-case value.) Anyways, this pumps free money from each time where one particular embedder is "in the lead" above the true environment.
However, this doesn't rule out cases where multiple embedders that aren't the true environment are "in the lead". Intuitively, it's possible to have a pileup where multiple embedders have approximately the same estimate of their argmax value, and the same estimate of what ADT actually does in them. There's another case where the true environment is "in the lead" along with several others. The obvious fix to this is to have the trader buy and sell conditional contracts, so the money-pump goes through for whichever embedder is actually selected, and all the others cancel out to 0 value. The point where this fails is that it wouldn't necessarily be buying the shares as originally stated. In order to get a share, it would be purchasing it at its conditional price (conditional on that embedder being the one that is selected), which may not equal its unconditional price. Attempts to construct more sophisticated money-pumps all met with failure due to this specific issue, where regret-free environments may say that argmax does worse when ADT copies it.
Failed Attempt 2:
We will give a set of environments, and see that two of the embedders will converge to passing the reality filter, but ADT dominates neither of the argmax agents (however, one of the environments is not known to be regret-free).
In particular, let the true environment be a game of chicken against the same agent. $\mathcal{A}$ will consist of the three actions "swerve", "straight", and "consult a random number generator in the environment to swerve some fixed fraction of the time" (augmenting $\mathcal{A}$ to allow consulting the random number generator with different randomization probabilities doesn't change the result). $\mathcal{E}$ will consist of the two environments $E^1$ (which substitutes the action into both your action slot and the opponent's action slot), and $E^2$ (which substitutes the action in your action slot, and the opponent is you). The utility function gives the highest payoff for going straight while your opponent swerves, the lowest payoff for crashing, and an intermediate payoff for swerving.
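To fix ideas, here is a sketch of the two embedders, with placeholder payoffs (1 for going straight while the other player swerves, 0 for crashing, 0.4 for swerving) and with the randomizing action drawing independently for each slot. The actual payoff values and randomization probability from the original construction are not reproduced above, so all of the numbers here are stand-ins.

```python
import random

# Placeholder chicken payoffs (stand-ins; the original values are not reproduced here):
# 1.0 for going straight while the other player swerves, 0.0 for crashing, 0.4 for swerving.
def payoff(me, opponent):
    if me == "straight":
        return 1.0 if opponent == "swerve" else 0.0
    return 0.4  # swerving pays the same regardless of what the opponent does

def resolve(action, rng, p_swerve=0.5):
    # "randomize" consults the environment's RNG; assume each slot gets an independent draw.
    if action == "randomize":
        return "swerve" if rng.random() < p_swerve else "straight"
    return action

def embedder_1(action, rng):
    """E^1: substitute the candidate action into BOTH players' slots."""
    return payoff(resolve(action, rng), resolve(action, rng))

def embedder_2(action, adt_action, rng):
    """E^2: substitute the candidate action into your slot only; the opponent is ADT itself."""
    return payoff(resolve(action, rng), resolve(adt_action, rng))

rng = random.Random(0)
for a in ["swerve", "straight", "randomize"]:
    avg = sum(embedder_1(a, rng) for _ in range(100_000)) / 100_000
    print(a, round(avg, 2))
# swerve 0.4, straight 0.0, randomize ~0.45 under these placeholder numbers,
# which is why argmax on E^1 settles on the randomizing action in this sketch.
```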
To begin with, both environments are identical when you plug in the agent itself, so both of them pass the reality filter. Argmax on the embedder $E^1$ converges to taking the randomizing action.
Assume that ADT learns to use the embedder $E^1$, predictably, infinitely often (that is, infinitely often, the market assigns a probability very near 1 to ADT using the embedder $E^1$). Then, on this sequence, it will converge to swerving that same fraction of the time, and in that case, ADT will learn to always use the embedder $E^2$ in response, and go straight, because this passes the reality filter and offers a greater payoff than the one that $E^1$ offers. So, we have a contradiction.
Assume that ADT learns to use the embedder $E^2$, predictably, infinitely often. Then, if the probability ADT assigns to itself going straight is low enough, it goes straight, and if the probability ADT assigns to itself going straight is high enough, it swerves. So the probability ADT assigns to itself going straight on that subsequence will converge to the threshold where the two options look equally good, which means that $E^2$'s estimated argmax value converges to the payoff of swerving, and because $E^1$ promises a greater utility, it will learn to use the embedder $E^1$ in those cases. So again, we have a contradiction.
So, according to the probabilities of the logical inductor itself, it is unpredictable what ADT will do, and by calibration, this applies to what ADT does in reality. Since differences in the estimated argmax values of $E^1$ and $E^2$ would allow predicting which embedder ADT will pick, the two must converge to each other, and since the latter converges to a definite value, the former must as well.
Now (I'm moderately sure it's the only solution, but don't have a proof), there is a stable solution where the agent doesn't know which embedder it picks, but with some probability it picks $E^1$, and with the complementary probability it picks $E^2$ (and argmax on $E^2$ always picks "go straight", so it runs into itself), for an expected utility strictly below what both embedders claim that argmax for them specifically gets. Because it's unpredictable which embedder it will go with, the expected value of going straight (in embedder $E^2$) exceeds the expected value of swerving and the expected value of randomizing. So argmax on $E^2$ converges to always going straight, for an expected utility (in embedder $E^2$) set by how often ADT actually swerves.
14 comments
Comments sorted by top scores.
comment by jessicata (jessica.liu.taylor) · 2018-09-27T08:32:15.274Z · LW(p) · GW(p)
Nice catch on both the optimality result being wrong and ADT not crashing into itself on the chicken problem! It's really great to know these things. This indicates that we weren't checking decision theory arguments with enough rigor.
comment by jessicata (jessica.liu.taylor) · 2018-10-20T03:56:11.276Z · LW(p) · GW(p)
OK, I checked this more and I'm more suspicious now.
First: in the ADT paper, the asymptotic dominance argument is about the limit of the agent's action as epsilon goes to 0. This limit is not necessarily computable, so the embedder can't contain the agent, since it doesn't know epsilon. So the evil problem doesn't work. The optimality proof might be valid. I didn't understand which specific step you thought was wrong.
Second: This also means the chicken problem is ill-typed, since you can't put an ADT in the environment. But there is a well-typed version where you give the same epsilon to both ADT agents, and the embedders have epsilon hard-coded.
Consider the following embedder. According to this embedder, you will play chicken against ADT-epsilon who knows who you are. When ADT-epsilon considers this embedder, it will always pass the reality filter, since in fact ADT-epsilon is playing against ADT-epsilon. Furthermore, this embedder gives NeverSwerveBot a high utility. So ADT-epsilon expects a high utility from this embedder, through NeverSwerveBot, and it never swerves.
comment by Diffractor · 2018-10-25T20:54:42.492Z · LW(p) · GW(p)
in the ADT paper, the asymptotic dominance argument is about the limit of the agent's action as epsilon goes to 0. This limit is not necessarily computable, so the embedder can't contain the agent, since it doesn't know epsilon. So the evil problem doesn't work.
Agreed that the evil problem doesn't work for the original ADT paper. In the original ADT paper, the agents are allowed to output distributions over moves. I didn't like this because it implicitly assumes that it's possible for the agent to perfectly randomize, and I think randomization is better modeled by a (deterministic) action that consults an environmental random-number generator, which may be correlated with other things.
What I meant was that, in the version of argmax that I set up, if $\Pi$ is the two constant policies "take blank box" and "take shiny box", then for the embedder where the opponent runs argmax to select which box to fill, the argmax agent will converge to deterministically randomizing between the two policies, by the logical inductor assigning very similar expected utility to both options such that the inductor can't predict which action will be chosen. And this occurs because the inductor outputting more of "take the blank box" will have "take the shiny box" converge to a higher expected value (so argmax will learn to copy that), and the inductor outputting more of "take the shiny box" will have "take the blank box" converge to a higher expected value (so argmax will learn to copy that).
The optimality proof might be valid. I didn't understand which specific step you thought was wrong.
So, the original statement in the paper was
It must then be the case that for every . Let be the first element of in . Since every class will be separated by at least in the limit, will eventually be a distribution over just . And since for every , , by the definition of it must be the case that .
The issue with this is the last sentence. It's basically saying "since the two actions $A$ and $A'$ get equal expected utility in the limit, the total variation distance between a distribution over the two actions, and one of the actions, limits to zero", which is false.
And it is specifically disproved by the second counterexample, where there are two actions that both result in 1 utility, so they're both in the same equivalence class, but a probabilistic mixture between them (which is what the continuous argmax converges to playing) gets less than 1 utility.
Consider the following embedder. According to this embedder, you will play chicken against ADT-epsilon who knows who you are. When ADT-epsilon considers this embedder, it will always pass the reality filter, since in fact ADT-epsilon is playing against ADT-epsilon. Furthermore, this embedder gives NeverSwerveBot a high utility. So ADT-epsilon expects a high utility from this embedder, through NeverSwerveBot, and it never swerves.
You'll have to be more specific about "who knows who you are". If it unpacks as "opponent only uses the embedder where it is up against [whatever policy you plugged in]", then NeverSwerveBot will have a high utility, but it will get knocked down by the reality filter, because if you converge to never swerving, the true utility will converge to 0, and the inductor will learn that you are copying NeverSwerveBot, so it will converge to assigning equal expected value to the embedder run on you and the embedder run on NeverSwerveBot, and the latter converges to 1.
If it unpacks as "opponent is ADT-epsilon", and you converge to never swerving, then argmaxing will start duplicating the swerve strategy instead of going straight. In both cases, the argument fails.
comment by jessicata (jessica.liu.taylor) · 2018-10-27T08:37:27.785Z · LW(p) · GW(p)
In the original ADT paper, the agents are allowed to output distributions over moves.
The fact that we take the limit as epsilon goes to 0 means the evil problem can't be constructed, even if randomization is not allowed. (The proof in the ADT paper doesn't work, but that doesn't mean something like it couldn't possibly work)
It's basically saying "since the two actions A and A′ get equal expected utility in the limit, the total variation distance between a distribution over the two actions, and one of the actions, limits to zero", which is false
You're right, this is an error in the proof, good catch.
Re chicken: The interpretation of the embedder that I meant is "opponent only uses the embedder where it is up against [whatever policy you plugged in]". This embedder does not get knocked down by the reality filter. Let $E$ be the embedder. The logical inductor expects $E(\text{ADT-}\epsilon)$ to equal the crash/crash utility, and it also expects the true utility to equal the crash/crash utility. The two expressions are provably equal, so of course the logical inductor expects them to be the same, and the reality check passes.
The error in your argument is that you are embedding actions rather than agents. The fact that NeverSwerveBot and ADT both provably always take the straight action does not mean the embedder assigns them equal utilities.
comment by Diffractor · 2018-10-29T21:50:53.040Z · LW(p) · GW(p)
Wasn't there a fairness/continuity condition in the original ADT paper that if there were two "agents" that converged to always taking the same action, then the embedder would assign them the same value? (More specifically, if two agents' action sequences converge to each other, then the values the embedder assigns to them converge to each other as well.) This would mean that it'd be impossible to have $E(\text{ADT-}\epsilon)$ be low while $E(\text{NeverSwerveBot})$ is high, so the argument still goes through.
Although, after this whole line of discussion, I'm realizing that there are enough substantial differences between the original formulation of ADT and the thing I wrote up that I should probably clean up this post a bit and clarify more about what's different in the two formulations. Thanks for that.
comment by jessicata (jessica.liu.taylor) · 2018-10-29T22:31:46.519Z · LW(p) · GW(p)
Yes, the continuity condition on embedders in the ADT paper would eliminate the embedder I meant. Which means the answer might depend on whether ADT considers discontinuous embedders. (The importance of the continuity condition is that it is used in the optimality proof; the optimality proof can't apply to chicken for this reason).
comment by jessicata (jessica.liu.taylor) · 2018-09-27T05:28:28.097Z · LW(p) · GW(p)
The most prominent undesirable feature of this is that it's restricted to a finite set of embedders. Optimistic choice fails very badly on an infinite set of embedders, because we can consider an infinite sequence of embedders that are like "pressing the button dispenses 1 utility forever", "pressing the button delivers an electric shock the first time, and then dispenses 1 utility forever"... "pressing the button delivers an electric shock for the first n times, and then dispenses 1 utility forever"... "pressing the button just always delivers an electric shock". Optimistic choice among embedders keeps pressing the button, because, although it keeps getting shocked, there's always an embedder that promises that the agent's fortunes will turn around on the next turn.
Seems like infinite bandit algorithms should work here? Basically just give more complex embedders some regularization penalty.
comment by Diffractor · 2018-09-28T05:22:26.498Z · LW(p) · GW(p)
I got an improved reality-filter that blocks a certain class of environments that lead conjecture 1 to fail, although it isn't enough to deal with the provided chicken example and lead to a proof of conjecture 1. (the subscripts will be suppressed for clarity)
Instead of the reality-filter for $E$ being

$$\left|\mathbb{E}[E(\mathrm{ADT})] - \mathbb{E}[U]\right| \to 0$$

it is now

$$\sum_{E' \in \mathcal{E}} \mathbb{P}(\mathrm{ADT\ selects\ } E') \cdot \Big|\mathbb{E}[E(\mathrm{ADT}) \mid \mathrm{ADT\ selects\ } E'] - \mathbb{E}[U \mid \mathrm{ADT\ selects\ } E']\Big| \to 0$$
This doesn't just check whether reality is recovered on average, it also checks whether all the "plausible conditionals" line up as well. Some of the conditionals may not be well-formed, as there may be conditioning on low-or-zero probability events, but these are then multiplied by a very small number, so no harm is done.
This has the nice property that for all "plausibly chosen embedders" $E''$ that have a probability sufficiently far away from 0, all embedders $E$ and $E'$ that pass this reality filter have the property that

$$\mathbb{E}[E(\mathrm{ADT}) \mid \mathrm{ADT\ selects\ } E''] \approx \mathbb{E}[E'(\mathrm{ADT}) \mid \mathrm{ADT\ selects\ } E'']$$
So all embedders that pass the reality filter will agree on the expected utility of selecting a particular embedder that isn't very unlikely to be selected.
comment by jessicata (jessica.liu.taylor) · 2018-09-27T05:23:44.809Z · LW(p) · GW(p)
ADT (asymptotic decision theory) was an initial attempt at decision theory with logical inductors before the standard form (that has exploration steps), which is detailed in this post.
I'm confused by this sentence. First, what is the standard form? ADT was definitely invented after logical EDT with exploration (the thing you link to). Second, why do you link to a post on logical EDT and not to the ADT paper?
comment by Diffractor · 2018-09-27T08:06:03.342Z · LW(p) · GW(p)
I think I remember the original ADT paper showing up on agent foundations forum before a writeup on logical EDT with exploration, and my impression of which came first was affected by that. Also, the "this is detailed in this post" was referring to logical EDT for exploration. I'll edit for clarity.
comment by jessicata (jessica.liu.taylor) · 2018-09-27T08:11:54.119Z · LW(p) · GW(p)
OK, I helped invent ADT so I know it conceptually came after. (I don't think it was "shortly after"; logical EDT was invented very shortly after logical inductors, in early 2016, and ADT was in late 2016). I think you should link to the ADT paper in the intro section so people know what you're talking about.
comment by mako yass (MakoYass) · 2018-09-30T04:33:32.862Z · LW(p) · GW(p)
The link to Tsvi's post seems to be broken https://github.com/tsvibt/decision-theory-notation/blob/master/main.pdf
comment by jessicata (jessica.liu.taylor) · 2018-09-27T08:27:59.315Z · LW(p) · GW(p)
This does not pay up on counterfactual blackmail.
What's counterfactual blackmail?
EDIT: if you meant counterfactual mugging, I think one way to solve this is to use a low amount of computation power to select which agent to emulate, then use a high amount of computation power to run that agent. Of course, this is somewhat unsatisfying, since there isn't a canonical way of choosing how much less computation power to use.
comment by Diffractor · 2018-09-27T16:01:20.403Z · LW(p) · GW(p)
Yup, I meant counterfactual mugging. Fixed.