## Posts

Reflective oracles as a solution to the converse Lawvere problem 2018-11-29T03:23:51.180Z
Iteration Fixed Point Exercises 2018-11-22T00:35:09.885Z
Diagonalization Fixed Point Exercises 2018-11-18T00:31:19.683Z
Topological Fixed Point Exercises 2018-11-17T01:40:06.342Z
Logical inductor limits are dense under pointwise convergence 2016-10-06T08:07:53.000Z
A Counterexample to an Informal Conjecture on Proof Length and Logical Counterfactuals 2015-07-17T17:34:51.000Z

## Comments

Comment by SamEisenstat on Alignment proposals and complexity classes · 2020-07-22T02:06:07.567Z · LW · GW

Theorem. Weak HCH (and similar proposals) contain EXP.

Proof sketch: I give a strategy that H can follow to determine whether some machine $M$ that runs in exponential time accepts. Basically, we need to answer questions of the form "Does cell $i$ have value $v$ at time $t$?" and "Was the head in position $i$ at time $t$?", where $i$ and $t$ are bounded by some function in $2^{n^{O(1)}}$. Using place-system representations of $i$ and $t$, these questions have length polynomial in $n$, so they can be asked. Further, each question is a simple function of a constant number of other such questions about earlier times as long as $t > 0$, and can be answered directly in the base case $t = 0$.
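The recursion in this sketch can be made concrete with a toy tableau computation (a hypothetical machine that walks right writing 1s; this illustrates the question structure only, not HCH itself):

```python
from functools import lru_cache

# Toy illustration of the recursion in the proof sketch: the answer about
# a tape cell at time t is a simple function of a constant number of
# answers about time t - 1, and is given directly in the base case t = 0.
# The machine here is hypothetical: it walks right, writing 1s.

def transition(left, center, right):
    # Each cell at time t depends only on cells i-1, i, i+1 at time t-1.
    (lv, lh), (cv, ch), (rv, rh) = left, center, right
    value = '1' if ch else cv        # head writes a 1 where it sits
    head = lh                        # head moves one cell to the right
    return (value, head)

@lru_cache(maxsize=None)
def cell(i, t):
    """Answer: what is the value of cell i at time t, and is the head there?"""
    if t == 0:
        return ('0', i == 0)         # base case: blank tape, head at cell 0
    return transition(cell(i - 1, t - 1), cell(i, t - 1), cell(i + 1, t - 1))

# After 3 steps the head has visited cells 0..2, writing 1s.
assert [cell(i, 3)[0] for i in range(4)] == ['1', '1', '1', '0']
```

Each query about time $t$ unfolds into a constant number of queries about time $t - 1$, which is what lets H delegate them as subquestions.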

Comment by SamEisenstat on The Cartoon Guide to Löb's Theorem · 2020-04-21T07:03:42.040Z · LW · GW

*I* think that there's a flaw in the argument.

I could elaborate, but maybe you want to think about this more, so for now I'll just address your remark about , where is refutable. If we assume that , then, since is false, must be false, so must be true. That is, you have proven that PA proves , that is, since is contradictory, PA proves its own inconsistency. You're right that this is compatible with PA being consistent--PA may be consistent but prove its own inconsistency--but this should still be worrying.

Comment by SamEisenstat on Adding Up To Normality · 2020-03-25T06:01:30.660Z · LW · GW

This reminds me of the Discourse on Method.

> [T]here is seldom so much perfection in works composed of many separate parts, upon which different hands had been employed, as in those completed by a single master. Thus it is observable that the buildings which a single architect has planned and executed, are generally more elegant and commodious than those which several have attempted to improve, by making old walls serve for purposes for which they were not originally built. Thus also, those ancient cities which, from being at first only villages, have become, in course of time, large towns, are usually but ill laid out compared with the regularly constructed towns which a professional architect has freely planned on an open plain; so that although the several buildings of the former may often equal or surpass in beauty those of the latter, yet when one observes their indiscriminate juxtaposition, there a large one and here a small, and the consequent crookedness and irregularity of the streets, one is disposed to allege that chance rather than any human will guided by reason must have led to such an arrangement. And if we consider that nevertheless there have been at all times certain officers whose duty it was to see that private buildings contributed to public ornament, the difficulty of reaching high perfection with but the materials of others to operate on, will be readily acknowledged. In the same way I fancied that those nations which, starting from a semi-barbarous state and advancing to civilization by slow degrees, have had their laws successively determined, and, as it were, forced upon them simply by experience of the hurtfulness of particular crimes and disputes, would by this process come to be possessed of less perfect institutions than those which, from the commencement of their association as communities, have followed the appointments of some wise legislator.
>
> It is thus quite certain that the constitution of the true religion, the ordinances of which are derived from God, must be incomparably superior to that of every other. And, to speak of human affairs, I believe that the pre-eminence of Sparta was due not to the goodness of each of its laws in particular, for many of these were very strange, and even opposed to good morals, but to the circumstance that, originated by a single individual, they all tended to a single end. In the same way I thought that the sciences contained in books (such of them at least as are made up of probable reasonings, without demonstrations), composed as they are of the opinions of many different individuals massed together, are farther removed from truth than the simple inferences which a man of good sense using his natural and unprejudiced judgment draws respecting the matters of his experience. And because we have all to pass through a state of infancy to manhood, and have been of necessity, for a length of time, governed by our desires and preceptors (whose dictates were frequently conflicting, while neither perhaps always counseled us for the best), I farther concluded that it is almost impossible that our judgments can be so correct or solid as they would have been, had our reason been mature from the moment of our birth, and had we always been guided by it alone.
>
> It is true, however, that it is not customary to pull down all the houses of a town with the single design of rebuilding them differently, and thereby rendering the streets more handsome; but it often happens that a private individual takes down his own with the view of erecting it anew, and that people are even sometimes constrained to this when their houses are in danger of falling from age, or when the foundations are insecure. With this before me by way of example, I was persuaded that it would indeed be preposterous for a private individual to think of reforming a state by fundamentally changing it throughout, and overturning it in order to set it up amended; and the same I thought was true of any similar project for reforming the body of the sciences, or the order of teaching them established in the schools: but as for the opinions which up to that time I had embraced, I thought that I could not do better than resolve at once to sweep them wholly away, that I might afterwards be in a position to admit either others more correct, or even perhaps the same when they had undergone the scrutiny of reason. I firmly believed that in this way I should much better succeed in the conduct of my life, than if I built only upon old foundations, and leaned upon principles which, in my youth, I had taken upon trust. For although I recognized various difficulties in this undertaking, these were not, however, without remedy, nor once to be compared with such as attend the slightest reformation in public affairs. Large bodies, if once overthrown, are with great difficulty set up again, or even kept erect when once seriously shaken, and the fall of such is always disastrous.
>
> Then if there are any imperfections in the constitutions of states (and that many such exist the diversity of constitutions is alone sufficient to assure us), custom has without doubt materially smoothed their inconveniences, and has even managed to steer altogether clear of, or insensibly corrected a number which sagacity could not have provided against with equal effect; and, in fine, the defects are almost always more tolerable than the change necessary for their removal; in the same manner that highways which wind among mountains, by being much frequented, become gradually so smooth and commodious, that it is much better to follow them than to seek a straighter path by climbing over the tops of rocks and descending to the bottoms of precipices.
>
> ...
>
> And finally, as it is not enough, before commencing to rebuild the house in which we live, that it be pulled down, and materials and builders provided, or that we engage in the work ourselves, according to a plan which we have beforehand carefully drawn out, but as it is likewise necessary that we be furnished with some other house in which we may live commodiously during the operations, so that I might not remain irresolute in my actions, while my reason compelled me to suspend my judgement, and that I might not be prevented from living thenceforward in the greatest possible felicity, I formed a provisory code of morals, composed of three or four maxims, with which I am desirous to make you acquainted.
>
> The first was to obey the laws and customs of my country, adhering firmly to the faith in which, by the grace of God, I had been educated from my childhood and regulating my conduct in every other matter according to the most moderate opinions, and the farthest removed from extremes, which should happen to be adopted in practice with general consent of the most judicious of those among whom I might be living. For as I had from that time begun to hold my own opinions for nought because I wished to subject them all to examination, I was convinced that I could not do better than follow in the meantime the opinions of the most judicious; and although there are some perhaps among the Persians and Chinese as judicious as among ourselves, expediency seemed to dictate that I should regulate my practice conformably to the opinions of those with whom I should have to live; and it appeared to me that, in order to ascertain the real opinions of such, I ought rather to take cognizance of what they practised than of what they said, not only because, in the corruption of our manners, there are few disposed to speak exactly as they believe, but also because very many are not aware of what it is that they really believe; for, as the act of mind by which a thing is believed is different from that by which we know that we believe it, the one act is often found without the other. Also, amid many opinions held in equal repute, I chose always the most moderate, as much for the reason that these are always the most convenient for practice, and probably the best (for all excess is generally vicious), as that, in the event of my falling into error, I might be at less distance from the truth than if, having chosen one of the extremes, it should turn out to be the other which I ought to have adopted.
>
> And I placed in the class of extremes especially all promises by which somewhat of our freedom is abridged; not that I disapproved of the laws which, to provide against the instability of men of feeble resolution, when what is sought to be accomplished is some good, permit engagements by vows and contracts binding the parties to persevere in it, or even, for the security of commerce, sanction similar engagements where the purpose sought to be realized is indifferent: but because I did not find anything on earth which was wholly superior to change, and because, for myself in particular, I hoped gradually to perfect my judgments, and not to suffer them to deteriorate, I would have deemed it a grave sin against good sense, if, for the reason that I approved of something at a particular time, I therefore bound myself to hold it for good at a subsequent time, when perhaps it had ceased to be so, or I had ceased to esteem it such.
>
> My second maxim was to be as firm and resolute in my actions as I was able, and not to adhere less steadfastly to the most doubtful opinions, when once adopted, than if they had been highly certain; imitating in this the example of travelers who, when they have lost their way in a forest, ought not to wander from side to side, far less remain in one place, but proceed constantly towards the same side in as straight a line as possible, without changing their direction for slight reasons, although perhaps it might be chance alone which at first determined the selection; for in this way, if they do not exactly reach the point they desire, they will come at least in the end to some place that will probably be preferable to the middle of a forest. In the same way, since in action it frequently happens that no delay is permissible, it is very certain that, when it is not in our power to determine what is true, we ought to act according to what is most probable; and even although we should not remark a greater probability in one opinion than in another, we ought notwithstanding to choose one or the other, and afterwards consider it, in so far as it relates to practice, as no longer dubious, but manifestly true and certain, since the reason by which our choice has been determined is itself possessed of these qualities. This principle was sufficient thenceforward to rid me of all those repentings and pangs of remorse that usually disturb the consciences of such feeble and uncertain minds as, destitute of any clear and determinate principle of choice, allow themselves one day to adopt a course of action as the best, which they abandon the next, as the opposite.

(This is probably 5% of the text. There is more interesting stuff there, but it's less relevant to this post.)

Comment by SamEisenstat on A Loophole for Self-Applicative Soundness · 2018-06-12T08:47:23.000Z · LW · GW

As you say, this isn't a proof, but it wouldn't be too surprising if this were consistent. There is some such that has a proof of length by a result of Pavel Pudlák (On the length of proofs of finitistic consistency statements in first order theories). Here I'm making the dependence on explicit, but not the dependence on . I haven't looked at it closely, but the proof strategy in Theorems 5.4 and 5.5 suggests that will not depend on , as long as we only ask for the weaker property that will only be provable in length for sentences of length at most .

Comment by SamEisenstat on A Loophole for Self-Applicative Soundness · 2018-06-05T20:46:31.000Z · LW · GW

I misunderstood your proposal, but you don't need to do this work to get what you want. You can just take each sentence as an axiom, but declare that this axiom takes symbols to invoke. This could be done by changing the notion of length of a proof, or by taking axioms and with very long.

Comment by SamEisenstat on Open question: are minimal circuits daemon-free? · 2018-05-09T01:52:01.313Z · LW · GW

Yeah, I had something along the lines of what Paul said in mind. I wanted not to require that the circuit implement exactly a given function, so that we could see if daemons show up in the output. It seems easier to define daemons if we can just look at input-output behaviour.

Comment by SamEisenstat on Open question: are minimal circuits daemon-free? · 2018-05-08T20:01:10.112Z · LW · GW

I'm having trouble thinking about what it would mean for a circuit to contain daemons such that we could hope for a proof. It would be nice if we could find a simple such definition, but it seems hard to make this intuition precise.

For example, we might say that a circuit contains daemons if it displays more optimization than necessary to solve a problem. Minimal circuits could have daemons under this definition though. Suppose that some function $f$ describes the behaviour of some powerful agent, a function $\tilde{f}$ is like $f$ with noise added, and our problem is to predict the function $\tilde{f}$ sufficiently well. Then, the simplest circuit that does well won't bother to memorize a bunch of noise, so it will pursue the goals of the agent described by $f$ more efficiently than $\tilde{f}$ does, and thus more efficiently than necessary.

Comment by SamEisenstat on No Constant Distribution Can be a Logical Inductor · 2018-04-14T01:39:38.000Z · LW · GW

Two minor comments. First, the bitstrings that you use do not all correspond to worlds, since, for example, implies , as is a subtheory of . This can be fixed by instead using a tree of sentences that all diagonalize against themselves. Tsvi and I used a construction in this spirit in A limit-computable, self-reflective distribution, for example.

Second, I believe that weakening #2 in this post also cannot be satisfied by any constant distribution. To sketch my reasoning, a trader can try to buy a sequence of sentences $\phi_1, \phi_2, \dots$, spending $2^{-n}$ dollars on the $n$th conjunction $\phi_1 \wedge \dots \wedge \phi_n$. It should choose the sequence of sentences so that $\phi_1 \wedge \dots \wedge \phi_n$ has probability at most $2^{-n}$, and then it will make an infinite amount of money if the sentences are simultaneously true.

The way to do this is to choose each $\phi_n$ from a list of all sentences. If at any point you notice that the conjunction $\phi_1 \wedge \dots \wedge \phi_n$ has too high a probability, then pick a new sentence for $\phi_n$. We can sell all the conjunctions involving it and get back the original amount paid, by hypothesis. Then, if we can keep using sharper continuous tests of the probabilities of the sentences over time, we will settle down to a sequence with the desired property.

In order to turn this sketch into a proof, we need to be more careful about how these things are to be made continuous, but it seems routine.
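As a sanity check on the payoff arithmetic in this sketch (toy numbers; `trader_profit` and its price function are illustrative, not part of any logical inductor implementation):

```python
# Illustrative payoff arithmetic for the sketched trader: spend 2^-n
# dollars on the n-th conjunction at market price price(n) <= 2^-n.
def trader_profit(N, price):
    total_cost = 0.0
    total_payout = 0.0
    for n in range(1, N + 1):
        stake = 2.0 ** -n               # dollars spent on conjunction n
        shares = stake / price(n)       # shares bought at the market price
        total_cost += stake
        total_payout += shares          # each share pays 1 if the conjunction is true
    return total_payout - total_cost

# If every conjunction is priced at exactly 2^-n and all the sentences are
# true, profit after N sentences is N - (1 - 2^-N): bounded cost, unbounded payout.
profit = trader_profit(20, lambda n: 2.0 ** -n)
assert abs(profit - (20 - (1 - 2.0 ** -20))) < 1e-9
```

The total stake is at most one dollar, while the payout grows linearly in the number of conjunctions bought, which is the sense in which the trader makes an infinite amount of money.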

Comment by SamEisenstat on An Untrollable Mathematician · 2018-01-24T03:04:06.000Z · LW · GW

I at first didn't understand your argument for claim (2), so I wrote an alternate proof that's a bit more obvious/careful. I now see why it works, but I'll give my version below for anyone interested. In any case, what you really mean is the probability of deciding a sentence outside of by having it announced by nature; there may be a high probability of sentences being decided indirectly via sentences in .

Instead of choosing as you describe, pick so that the probability of sampling something in is greater than . Then, the probability of sampling something in is at least . Hence, no matter what sentences have been decided already, the probability that repeatedly sampling from selects before it selects any sentence outside of is at least

as desired.

> Furthermore, this argument makes it clear that the probability distribution we converge to depends only on the set of sentences which the environment will eventually assert, not on their ordering!

Oh, I didn't notice that aspect of things. That's pretty cool.

Comment by SamEisenstat on Some Criticisms of the Logical Induction paper · 2017-07-22T12:03:14.000Z · LW · GW

A few thoughts:

I agree that the LI criterion is "pointwise" in the way that you describe, but I think that this is both pretty good and as much as could actually be asked. A single efficiently computable trader can do a lot. It can enforce coherence on a polynomially growing set of sentences, search for proofs using many different proof strategies, enforce a polynomially growing set of statistical patterns, enforce reflection properties on a polynomially large set of sentences, etc. So, eventually the market will not be exploitable on all these things simultaneously, which seems like a pretty good level of accurate beliefs to have.

On the other side of things, it would be far too strong to ask for a uniform bound of the form "for every $\varepsilon$, there is some day $N$ such that after step $N$, no trader can multiply its wealth by a factor of more than $1 + \varepsilon$". This is because a trader can be hardcoded with arbitrarily many hard-to-compute facts. For every $\varepsilon$, there must eventually be a day on which the beliefs of your logical inductor assign probability less than $\varepsilon$ to some true statement, at which point a trader who has that statement hardcoded can multiply its wealth by $1/\varepsilon$. (I can give a construction of such a sentence using self-reference if you want, but it's also intuitively natural - just pick $1/\varepsilon$ many mutually exclusive statements with nothing to break the symmetry.)
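The symmetry argument in the parenthetical can be checked with toy numbers (the function below is purely illustrative): among $k$ mutually exclusive, exhaustive statements, even the best the market can do is price them uniformly, so the true one costs at most $1/k$ and a trader holding it multiplies its wealth by at least $k$.

```python
# Toy illustration: k mutually exclusive, exhaustive statements with
# nothing to break the symmetry. A trader who has the true one hardcoded
# spends $1 on shares at the market price and gets $1/price back.
def guaranteed_factor(probs, true_index):
    assert abs(sum(probs) - 1.0) < 1e-9     # exhaustive and exclusive
    price = probs[true_index]
    return 1.0 / price                      # $1 of shares pays out 1/price

k = 1024
uniform = [1.0 / k] * k
# By symmetry the market can do no better than uniform pricing, so some
# true statement lets a hardcoded trader multiply its wealth by at least k.
assert guaranteed_factor(uniform, true_index=0) >= k
```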

Thus, I wouldn't think of traders as "mistakes", as you do in the post. A trader can gain money on the market if the market doesn't already know all facts that will be listed by the deductive process, but that is a very high bar. Doing well against finitely many traders is already "pretty good".

What you can ask for regarding uniformity is for some simple function $f$ such that any trader $T$ can multiply its wealth by at most a factor $f(T)$. This is basically the idea of the mistake bound model in learning theory; you bound how many mistakes happen rather than when they happen. This would let you say more than the one-trader properties I mentioned in my first paragraph. In fact, the logical induction algorithm has this property; $f(T)$ is just the initial wealth of the trader $T$. You may therefore want to do something like setting traders' initial wealths according to some measure of complexity. Admittedly this isn't made explicit in the paper, but there's not much additional that needs to be done to think in this way; it's just the combination of the individual proofs in the paper with the explicit bounds you get from the initial wealths of the traders involved.

I basically agree completely on your last few points. The traders are a model class, not an ensemble method in any substantive way, and it is just confusing to connect them to the papers on ensemble methods that the LI paper references. Also, while I use the idea of logical induction to do research that I hope will be relevant to practical algorithms, it seems unlikely that any practical algorithm will look much like a LI. For one thing, finding fixed points is really hard without some property stronger than continuity, and you'd need a pretty good reason to put it in the inner loop of anything.

Comment by SamEisenstat on Two Questions about Solomonoff Induction · 2017-07-05T22:20:44.000Z · LW · GW

Universal Prediction of Selected Bits solves the related question of what happens if the odd bits are adversarial but the even bits copy the preceding odd bits. Basically, the universal semimeasure learns to do the right thing, but the exact sense in which the result is positive is subtle and has to do with the difference between measures and semimeasures. The methods may also be relevant to the questions here, though I don't see a proof for either question yet.

Comment by SamEisenstat on An Approach to Logically Updateless Decisions · 2017-06-11T23:21:05.000Z · LW · GW

Yeah, I like tail dependence.

There's this question of whether for logical uncertainty we should think of it more as trying to "un-update" from a more logically informed perspective rather than trying to use some logical prior that exists at the beginning of time. Maybe you've heard such ideas from Scott? I'm not sure if that's the right perspective, but it's what I'm alluding to when I say you're introducing artificial logical uncertainty.

Comment by SamEisenstat on Two Major Obstacles for Logical Inductor Decision Theory · 2017-06-10T06:31:25.000Z · LW · GW

Yeah, the 5 and 10 problem in the post actually can be addressed using provability ideas, in a way that fits in pretty naturally with logical induction. The motivation here is to work with decision problems where you can't prove statements of the form $A() = a \to U() = u$ for agent $A$, utility function $U$, action $a$, and utility value $u$, at least not with the amount of computing power provided, but you want to use inductive generalizations instead. That isn't necessary in this example, so it's more of an illustration.

To say a bit more, if you make logical inductors propositionally consistent, similarly to what is done in this post, and make them assign things that have been proven already probability 1, then they will work on the 5 and 10 problem in the post.

It would be interesting if there was more of an analogy to explore between the provability oracle setting and the inductive setting, and more ideas could be carried over from modal UDT, but it seems to me that this is a different kind of problem that will require new ideas.

Comment by SamEisenstat on Two Major Obstacles for Logical Inductor Decision Theory · 2017-06-10T01:16:37.000Z · LW · GW

It's hard to analyze the dynamics of logical inductors too precisely, so we often have to do things that feel like worst-case analysis, like considering an adversarial trader with sufficient wealth. I think that problems that show up from this sort of analysis can be expected to correspond to real problems in superintelligent agents, but that is a difficult question. The malignancy of the universal prior is part of the reason.

As to your proposed solution, I don't see how it would work. Scott is proposing that the trader makes conditional contracts, which are in effect voided if the event that they are conditional on doesn't happen, so the trader doesn't actually lose anything in this case. (It isn't discussed in this post, but conditional contracts can be built out of the usual sort of bets, with payoffs given by the definition of conditional probability.) So, in order to make the trader lose money, the events need to happen sometimes, not just be expected to happen with some non-negligible probability by the market.
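The parenthetical construction can be spelled out in a toy calculation (function names are mine, not from the post): a conditional bet on $A$ given $B$ at price $p$ has the same payoff profile as buying one share of $A \wedge B$ and selling $p$ shares of $B$, and at $p = P(A \wedge B)/P(B)$ the two positions also cost the same.

```python
# Toy illustration of building a conditional contract out of ordinary bets.
def conditional_payoff(a, b, p):
    # Direct definition: the bet settles only if B happens; otherwise voided.
    return ((1.0 if a else 0.0) - p) if b else 0.0

def synthesized_payoff(a, b, p):
    long_a_and_b = 1.0 if (a and b) else 0.0   # +1 share of "A and B"
    short_b = 1.0 if b else 0.0                # -p shares of "B"
    return long_a_and_b - p * short_b

p = 0.25  # hypothetical conditional price P(A | B)
for a in (False, True):
    for b in (False, True):
        assert conditional_payoff(a, b, p) == synthesized_payoff(a, b, p)
```

In every outcome the synthesized position pays exactly what the conditional contract pays, including paying nothing when $B$ fails, which is why the trader loses nothing in the voided case.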

Comment by SamEisenstat on An Approach to Logically Updateless Decisions · 2017-06-03T00:24:55.000Z · LW · GW

> In counterfactual mugging with a logical coin, AsDT always uses a logical inductor’s best-estimate of the utility it would get right now, so it sees the coin as already determined, and sees no benefit from giving Omega money in the cases where Omega asks for money.

The way I would think about what's going on is that if the coin is already known at the time that the expectations are evaluated, then the problem isn't convergent in the sense of AsDT. The agent that pays up whenever asked has a constant action, but it doesn't receive a constant expected utility. You can think of the averaging as introducing artificial logical uncertainty to make more things convergent, which is why it's more updateless. (My understanding is that this is pretty close to how you're thinking of it already.)

Comment by SamEisenstat on Generalizing Foundations of Decision Theory II · 2017-04-24T05:08:44.000Z · LW · GW

This isn't too related to your main point, but every ordered field can be embedded into a field of Hahn series, which might be simpler to work with than surreals.

That page discusses the basics of Hahn series, but not the embedding theorem. (Ehrlich, 1995) treats things in detail, but is long and introduces a lot of definitions. The embedding theorem is stated on page 23 (24 in the pdf).
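For readers who haven't seen them, a Hahn series (my summary of the standard definition, not taken from the linked page) is a formal sum with well-ordered support:

```latex
% Hahn series over a field k with ordered abelian value group \Gamma:
x = \sum_{\gamma \in \Gamma} a_\gamma t^\gamma, \qquad a_\gamma \in k,
\qquad \operatorname{supp}(x) = \{\gamma : a_\gamma \neq 0\} \text{ well-ordered}.
% Well-ordered support is what makes multiplication well-defined, and
% elements like t^{-1} are infinite relative to k, which is how
% non-Archimedean ordered fields can embed.
```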

Comment by SamEisenstat on The set of Logical Inductors is not Convex · 2016-09-27T16:20:37.000Z · LW · GW

Nicely done. I should have spent more time thinking about liar sentences; you really can do a lot with them.

Follow-up question - is the set of limits of logical inductors convex? (Your proof also makes me curious as to whether the set of "expanded limits" is convex, though that question is a bit less nice than the other).

Comment by SamEisenstat on A new proposal for logical counterfactuals · 2016-08-27T05:21:37.000Z · LW · GW

I'm copying over some (lightly edited) conversation about this post from Facebook.

Sam Eisenstat: In the original counterexample to the Trolljecture, I guess you're proposing that be logically prior to , so we can go back and replace with , but still have to use in order to derive ? Here I'm just using "" to mean "counterfactually results in" without meaning to imply that this should be defined modally.

I agree that this is intuitively sensible, but it is difficult to give a general definition that agrees with intuition - it's hard to even have an intuition in the general case. There has been some work along these lines though; see for example https://agentfoundations.org/item?id=241 though you may already know about this since you've been talking with Scott. I'm pessimistic about this direction in general, though I'd be happy to talk further about it.

Jack Gallagher: What's the generator behind your being pessimistic?

Sam Eisenstat: Well, like Daniel suggests, we can ask whether the ordering is subjective/based on an agent's epistemic state. Unlike Daniel, I think there are problems with this. The best examples are things like transparent Newcomb's problem, where the agent needs to counterfact on things that it already knows the fact of the matter about. You can try to find loopholes in this, e.g. maybe the agent is doing something wrong if it isn't ignorant of certain facts, but I haven't been able to find something that's satisfying (e.g. doesn't fall to other thought experiments).

It thus seems that an agent should just counterfact on things even though it already knows them to be true, but this is a weird sort of thing to call "epistemic". It doesn't just depend on the agent's probability assignments to various statements. Thus, to go farther here would seem to require a richer idea of "agent" than we presently have.

The alternative is an objective temporal ordering on logical statements. (This is a terrible use of the excluded middle, since "subjective" and "objective" are smuggling in a lot, so the place to look is probably somewhere that feels like a third option.) Anyway, it seems absurd for there to be a fact of the matter as to whether 417*596 = 248532 comes temporally before or after 251*622 = 156122.

Part of this is just a feeling of arbitrariness because even if one of those did come before the other, we don't have the tools to know which one. Still, it seems the tools we do have are the wrong shape for building something that would work. For example, there seems to be an affinity between the ideas of being "easy", of depending on few things, but being depended on by many, and of being temporally "early". Patrick's revived trolljecture is one attempt at formalizing it, but there was nothing deep about my counterexample; it's just easy to generate statements that are difficult to evaluate. (There are different versions of this, like independent statements in undecidable theories, or hard SAT instances in propositional logic.)

I used the word "we" in the last paragraph, but maybe you have type-theory tools that are more suited to this?

Daniel Satanove: It's easier to come up with UDT when you have CDT. Is there a version of logical counterfactuals that works in more intuitive cases, but fail on stranger edge cases like transparent Newcomb's?

Sam Eisenstat: The EDT of logical counterfactuals is just EDT with a prior over logic (e.g. bounded Demski, something more sophisticated). Proof-based UDT is a special case of this; it prescribes actions if any reasonable prior would agree, and is silent otherwise.

Unfortunately, as shown by the trolljecture counterexample, there are counterfactuals that proof-based UDT gets wrong (the trolljecture basically says that what proof-based UDT does should be read as counterfactuals), and therefore that EDT gets wrong given any reasonable prior, so it's hard to define a “domain of validity” for this (not that that's necessary for it to be informative).

One candidate for the CDT of logical counterfactuals would just be putting a causal DAG structure on logical statements, like discussed in Scott's forum post that I linked above. The TDT paper is informal, and I haven't looked at it in a while, but I think that you can read it as proposing exactly this.

I believe that in all thought experiments where people have an intuition about what should be done, they are using this sort of reasoning plus strategy selection (i.e. updatelessness). I would be interested in seeing any thought experiments that people do not analyze in this way.

There isn't much in the direction of actually defining such causal networks. Back when the trolljecture looked plausible we tried to interpret it in terms of causal networks, which didn't quite work (see http://epsilonofdoom.blogspot.com/2015/08/provability-counterfactuals-vs-three.html). I expect something like Patrick's revived trolljecture to be even harder to view in this way, since it's inherently approximate; a notion of approximate causal network would be more appropriate here, if one could be found. I don't know of any proposal that (1) actually defines a causal network and (2) works on some easy examples.

Well, that's not entirely true. You can just pick any ordering that puts the agent's decision at the beginning, and use the resulting fully-connected DAG. This works wherever proof-based UDT works, but fails on the trolljecture, exactly because it assumes anything that provably follows from your agent's action is causally downstream of it.

Tsvi BT: "One candidate for the CDT of logical counterfactuals would just be putting a causal DAG structure on logical statements"

How does this interact with logical updatelessness? What happens to your DAG as you learn new facts?

Sam Eisenstat: I'm not sure if it's obvious, but the idea is that each node has a full table of counterfactuals. An instance of this has more data than just an ordering.

It's not intended to be updateless; it's a CDT/TDT-style theory rather than UDT-style. You could layer strategy selection on top of it (though it could be computationally difficult if your DAG is hard to work with).

When you learn new facts you condition the probability distribution associated with the DAG on them, like for any Bayesian network.
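Concretely, conditioning here is the ordinary Bayesian-network operation of restricting the joint distribution to worlds satisfying the new fact and renormalizing. A minimal sketch (the toy prior over two "logical facts" is hypothetical):

```python
def condition(joint, fact):
    """Condition a discrete joint distribution on a predicate over worlds."""
    kept = {w: p for w, p in joint.items() if fact(w)}
    z = sum(kept.values())  # renormalization constant
    return {w: p / z for w, p in kept.items()}

# A toy prior over two "logical facts" phi and psi:
joint = {("phi", "psi"): 0.4, ("phi", "not-psi"): 0.2,
         ("not-phi", "psi"): 0.1, ("not-phi", "not-psi"): 0.3}

# Learn phi; the DAG's counterfactual tables are untouched, only the
# distribution over worlds changes.
posterior = condition(joint, lambda w: w[0] == "phi")
```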

Tsvi BT: So then this is dynamically inconsistent in the counterfactual mugging with a logical coin.

Sam Eisenstat: Oh yeah, "layer strategy selection on top" doesn't work for logical uncertainty. Anyway, this was in response to Daniel's request for a theory that "works in more intuitive cases"; I hope I didn't give the impression that this would be hard to break.

On the other hand, you could try to "just do strategy selection" with respect to a particular prior if you're willing to treat everything you learn as an observation. This is easy if you pick a coherent prior, but it's uncomputable and it doesn't let you pay counterfactual muggers with logical coins. Abram's GLS stuff tries to do this with priors that don't already know all logical facts. What do you think of that sort of approach?

Tsvi BT: I don't have much more concrete to say. I currently view decision theory as being the questions of how to do logical updatelessness, and how to do strategy selection with a bounded reasoner. The GLS prior might be close, but there are basic open questions about how it works. "Strategic uncertainty" (or something) may or may not just boil down to logical uncertainty; the reason it might not is that you might actually have to define an optimization target other than "what Ideal Me thinks is the best strategy", to take into account that actual versions of you are bounded in different ways. These two "components" (I wildly speculate) could combine to give a reflectively stable decision theory.

Daniel Satanove: Proof lengths probably aren't the only way of doing the time step thing. Take the proof length definition of observation at time k, except delay all proofs of statements of the form a + b = c by one time step. This, or something sort of like this, will probably also work. Also any strictly increasing function on observation time should work.

The observation at time k should be an epistemic thing that describes how an agent learns of new theorems, rather than some fundamental property of math.

Jack Gallagher: What do you mean by "an epistemic thing"?

Daniel Satanove: "In order to break the cycles, we want some notion of logical time from which influence can only propagate forward. Proof length seems, at least to me, to be the most (only?) natural way to pull something like that off. A proof of length k can be thought of as a logical observation at time k."

You are restricting yourself to agents who do a breadth first search through proofs rather than anything that can do a more directed approach. Saying that you want some notion of "logical time" kind of sounds like you want something that is a property of the math rather than a property of the agent.

Comment by SamEisenstat on Strict Dominance for the Modified Demski Prior · 2015-12-05T03:37:59.000Z · LW · GW

[EDIT: This all deals with measures, not semimeasures; see below.]

For $\nu$ to dominate $\mu$ in the old sense means that $\nu$'s Bayes score can only be some constant worse than $\mu$'s, no worse, regardless of environment. My definition here implies that, but I think it's a stricter requirement.

Your definition is equivalent to the standard definition. Li and Vitányi say that $\nu$ dominates $\mu$ iff there is some $c > 0$ such that for any binary string $x$, we have $\nu(x) \ge c\mu(x)$. Li and Vitányi's "probability distributions" take a binary string as input, while the probability distributions we are using ask for an element of a $\sigma$-algebra, but we can abuse notation by allowing a binary string to represent the event that the random sequence starts with this string as a prefix.

Theorem. For any probability distributions $\mu$ and $\nu$ on Cantor space and any $c > 0$, the condition $\nu(A) \ge c\mu(A)$ holds for all binary string events $A$ iff it holds for all Borel sets $A$.

Proof. The $\Leftarrow$ direction is immediate. For $\Rightarrow$, suppose that $\nu(A) \ge c\mu(A)$ for all binary string events $A$. We first show that this property holds for every event in the algebra $\mathcal{A}$ generated by the binary string events. We can write any $A \in \mathcal{A}$ as a finite disjoint union of binary string events $A_1, \dots, A_n$. Then $\nu(A) = \sum_{i=1}^n \nu(A_i) \ge c \sum_{i=1}^n \mu(A_i) = c\mu(A)$, as desired.

We can extend this to the $\sigma$-algebra generated by $\mathcal{A}$, which is just the Borel $\sigma$-algebra on Cantor space, by the monotone class theorem. I'm not sure how much measure theory you know, but you can think of this as a form of induction on measurable sets; if a property holds on an algebra $\mathcal{A}$ of sets and it is preserved by countable monotone unions and intersections, then it holds on every set in the $\sigma$-algebra generated by $\mathcal{A}$. For countable monotone unions, if we have $A_1 \subseteq A_2 \subseteq \cdots$ with $\nu(A_i) \ge c\mu(A_i)$ for each $i$, then $\nu\bigl(\bigcup_i A_i\bigr) = \lim_i \nu(A_i) \ge c \lim_i \mu(A_i) = c\mu\bigl(\bigcup_i A_i\bigr)$. We can do the same thing for countable monotone intersections $A_1 \supseteq A_2 \supseteq \cdots$.

EDIT: I'm talking about probability distributions rather than semimeasures. I don't know how your definition of dominance is helpful for semimeasures, though. The universal semimeasure $M$ is defined on binary sequences, and I think that the question of whether $M$ dominates depends on how you extend it to the whole $\sigma$-algebra. I expect that many reasonable extensions of $M$ would satisfy your Theorem 1, but this post (in particular the proof of Theorem 1) doesn't seem to choose a specific such extension.
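The finite-union step of the theorem is easy to check numerically. A small sketch (the i.i.d. coin measures and the fixed depth are illustrative assumptions, not anything from the post):

```python
from itertools import product

# Toy check of the easy step: if nu(x) >= c * mu(x) on every cylinder [x],
# the same bound holds on finite disjoint unions of cylinders (by linearity).
# mu and nu are i.i.d. coin measures, restricted to depth-4 cylinders.

def cyl(p_one, x):
    """Probability of the cylinder of sequences starting with string x."""
    return (p_one ** x.count("1")) * ((1 - p_one) ** x.count("0"))

mu = lambda x: cyl(0.5, x)  # fair coin
nu = lambda x: cyl(0.4, x)  # biased coin

depth = 4
strings = ["".join(bits) for bits in product("01", repeat=depth)]

# Worst cylinder ratio at this depth serves as the constant c:
c = min(nu(x) / mu(x) for x in strings)

# Any disjoint union A of depth-4 cylinders then satisfies nu(A) >= c * mu(A):
A = strings[:5]
assert sum(nu(x) for x in A) >= c * sum(mu(x) for x in A)
```

This only illustrates the algebra step at one fixed depth; the monotone class theorem is what carries the bound to the full Borel $\sigma$-algebra.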

Comment by SamEisenstat on Minimal intuitionistic logic as a setting for logical counterfactuals · 2015-07-30T21:38:25.000Z · LW · GW

Haskell doesn't actually let you do this as far as I can tell, but the natural computational way to implement a function $\bot \to A$ is with a case expression with no cases. This is sensible because $\bot$ has no constructors, so it should be possible to pattern match on it with such an expression. Another way of thinking of this is that we can use pattern matching on sum types, and $\bot$ is just the $0$-ary sum type.

Comment by SamEisenstat on Optimal and Causal Counterfactual Worlds · 2015-07-18T01:07:39.000Z · LW · GW

Condition 4 in your theorem coincides with Lewis's account of counterfactuals. Pearl cites Lewis, but he also criticizes him on the grounds that the ordering on worlds is too arbitrary. In the language of this post, he is saying that condition 2 arises naturally from the structure of the problem and that condition 4 derives from the deeper structure corresponding to condition 2.

I also noticed that the function and the partial order can be read as "time of first divergence from the real world" and "first diverges before", respectively. This makes the theorem a lot more intuitive.
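That reading is easy to make concrete. A small sketch (function names hypothetical; worlds taken to be finite strings):

```python
def first_divergence(w, v):
    """Index at which world-history w first differs from v (None if none)."""
    for i, (a, b) in enumerate(zip(w, v)):
        if a != b:
            return i
    return None  # identical on the compared prefix

def diverges_before(w1, w2, real):
    """The 'first diverges before' partial order, relative to the real world."""
    d1 = first_divergence(w1, real)
    d2 = first_divergence(w2, real)
    if d1 is None:
        return False  # w1 never diverges from the real world
    return d2 is None or d1 < d2
```

For example, with real world `"0110"`, the world `"0100"` (diverging at index 2) diverges before `"0111"` (diverging at index 3).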

Comment by SamEisenstat on A Counterexample to an Informal Conjecture on Proof Length and Logical Counterfactuals · 2015-07-17T17:50:30.000Z · LW · GW

We can also construct an example where with a short proof and also proves , but any such proof is much longer. We only need to put a bound on the proof length in A's proof search. Then, the argument that proves its own consistency still works, and is rather short: as the proof length bound increases. However, there cannot be a proof of within 's proof length bound, since if it found one it would immediately take action 1. In this case can still prove that simply by running the agent, but this argument shows that any such proof must be long.

Comment by SamEisenstat on An Informal Conjecture on Proof Length and Logical Counterfactuals · 2015-05-16T21:05:08.000Z · LW · GW

The argument that I had in mind was that if $\Box(A() \ne a)$, then $A() = a$ by the chicken rule, so $\mathrm{PA} \vdash \Box(A() \ne a) \to A() = a$ since PA knows how the chicken rule works. This gives us $\mathrm{PA} \vdash \Box(A() \ne a) \to \Box(A() = a)$, and hence $\mathrm{PA} \vdash \Box(A() \ne a) \to \Box\bot$, so PA can prove that if PA proves $A() \ne a$, then PA is inconsistent. I'll include this argument in my post, since you're right that this was too big a jump.

Edit: We also need to use this argument to show that the modal UDT agent gets to the part where it iterates over utilities, rather than taking an action at the chicken rule step. I didn't mention this explicitly, since I felt like I had seen it before often enough, but now I realize it is nontrivial enough to point out.
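In modal notation, a sketch of the chicken-rule argument (my rendering, writing $\Box$ for provability in PA):

```latex
\begin{align*}
&\mathrm{PA} \vdash \Box(A() \ne a) \to A() = a
  && \text{PA knows how the chicken rule works} \\
&\mathrm{PA} \vdash A() = a \to \Box(A() = a)
  && \text{formalized } \Sigma_1\text{-completeness} \\
&\mathrm{PA} \vdash \Box(A() \ne a) \to \Box\bot
  && \text{combining the two lines above}
\end{align*}
```

That is, PA proves that if it proves $A() \ne a$, then it is inconsistent.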

Comment by SamEisenstat on An Informal Conjecture on Proof Length and Logical Counterfactuals · 2015-05-16T07:26:29.000Z · LW · GW

I (just yesterday) found a counterexample to this. The universe is a 5-and-10 variant that uses the unprovability of consistency:

def U():
    if A() == 2:
        if PA is consistent:
            return 10
        else:
            return 0
    else:
        return 5


The agent can be taken to be modal UDT, using PA as its theory. (The example will still work for other theories extending PA; we just need the universe's theory to include the agent's. Also, to simplify some later arguments, we suppose that the agent uses the chicken rule, and that it checks action 1 first, then action 2.) Since the agent cannot prove the consistency of its theory, it will not be able to prove $A() = 2 \to U() = 10$, so the first implication that it can prove is $A() = 1 \to U() = 5$. Thus, it will end up taking action 1.

Now, we work in PA and try to show $A() = 2 \to U() = 0$. If PA is inconsistent (we have to account for this case since we are working in PA), then this follows straightforwardly from the definition of $U$. Next, we consider the case that PA is consistent and work through the agent's decision. PA can't prove $A() \ne 1$, since we used the chicken rule, so, since the sentence $A() = 1 \to U() = 5$ is easily provable, the sentence $A() = 1 \to U() = 10$ (i.e. the first sentence that the agent checks for proofs of) must be unprovable.

The next sentence we check is $A() = 2 \to U() = 10$. If the agent finds a proof of this, then it takes action 2. Otherwise, it moves on to the sentence $A() = 1 \to U() = 5$, which is easily provable as mentioned above, and it takes action 1. Hence, the agent takes action 2 iff it can prove $A() = 2 \to U() = 10$, i.e. $A() = 2 \leftrightarrow \Box(A() = 2 \to \mathrm{Con}(\mathrm{PA}))$ by the definition of the universe. Löb's theorem tells us that $\Box(\Box\mathrm{Con}(\mathrm{PA}) \to \mathrm{Con}(\mathrm{PA})) \leftrightarrow \Box\mathrm{Con}(\mathrm{PA})$, so $\Box\mathrm{Con}(\mathrm{PA})$ satisfies the same equivalence, and so, by the uniqueness of fixed points, it follows that $A() = 2 \leftrightarrow \Box\mathrm{Con}(\mathrm{PA})$. Then, we get $A() = 2 \to \Box\mathrm{Con}(\mathrm{PA})$, and $\Box\mathrm{Con}(\mathrm{PA}) \to \neg\mathrm{Con}(\mathrm{PA})$ by Gödel's second incompleteness theorem. Thus, if the agent takes action 2, then PA is inconsistent, so $U() = 0$ by the definition of the universe, as desired.
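For reference, the modal chain in this argument can be sketched in GL notation (my compact rendering, writing $\mathrm{Con}$ for $\mathrm{Con}(\mathrm{PA})$ and $\Box$ for provability in PA):

```latex
\begin{align*}
A() = 2 \;&\leftrightarrow\; \Box\bigl(A() = 2 \to \mathrm{Con}\bigr)
  && \text{agent's proof search, plus the definition of } U \\
\Box\mathrm{Con} \;&\leftrightarrow\; \Box\bigl(\Box\mathrm{Con} \to \mathrm{Con}\bigr)
  && \text{formalized L\"ob's theorem} \\
A() = 2 \;&\leftrightarrow\; \Box\mathrm{Con}
  && \text{uniqueness of fixed points} \\
\Box\mathrm{Con} \;&\to\; \neg\mathrm{Con}
  && \text{G\"odel's second incompleteness theorem}
\end{align*}
```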

This tells us that $\mathrm{PA} \vdash A() = 2 \to U() = 0$. Also, PA cannot prove $A() \ne 2$, by the chicken rule, so this proof is not vacuous. Since PA does not prove $A() = 2 \to U() = 10$ at all, the shortest proof of $A() = 2 \to U() = 0$ is much shorter than the shortest proof of $A() = 2 \to U() = 10$ for any definition of "much shorter". (One can object here that there is no shortest proof, but (a) it seems natural to define the "length of the shortest proof" to be infinite if there is no proof, and (b) it is probably straightforward but tedious to modify the agent and universe so that there is a proof of $A() = 2 \to U() = 10$, but it is very long.)

However, it is clear that $U() = 0$ is not a legitimate counterfactual consequence of $A() = 2$. Informally, if the agent chose action 2, it would have received utility 10, since PA is consistent. Thus, we have a counterexample.

One issue we discussed during the workshop is whether counterfactuals should be defined with respect to a state of knowledge. We may want to say here that we, who know a lot, are in a state of knowledge with respect to which $A() = 2$ would counterfactually result in $U() = 10$, but that someone who reasons in PA is in a state of knowledge w.r.t. which it would result in $U() = 0$. One way to think about this is that we know that PA is obviously consistent, irrespective of how the agent acts, whereas PA does not know that it is consistent, allowing an agent using PA to think of itself as counterfactually controlling PA's consistency. Indeed, this is roughly how the argument above proceeds.

I'm not sure that this is a good way of thinking about it, though. The agent goes through some weird steps, most notably a rather opaque application of the fixed point theorem, so I don't have a good feel for why it is reasoning this way. I want to unwrap that argument before I can say whether it's doing something that, on an intuitive level, constitutes legitimate counterfactual reasoning.

More worryingly, the perspective of counterfactuals as being defined w.r.t. states of knowledge seems to be at odds with PA believing a wrong counterfactual here. It would make sense for PA not to have enough information to make any statement about the counterfactual consequences of $A() = 2$, but that's not what's happening if we think of PA's counterfactuals as obeying this conjecture; instead, PA postulates a causal mechanism by which the agent controls the consistency of PA, which we didn't expect to be there at all. Maybe it would all make sense if I had a deeper understanding of the proof I gave, but right now it is very odd.

(This is rather long; perhaps it should be a post? Would anyone prefer that I clean up a few things and make this a post? I'll also expand on the issue I mention at the end when I have more time to think about it.)