I think a lot of this is factual knowledge. There are five publicly available questions from the FrontierMath dataset. Look at the last of these, which is supposed to be the easiest. The solution given is basically "apply the Weil conjectures". These were long-standing conjectures, a focal point of lots of research in algebraic geometry in the 20th century. I couldn't have solved the problem this way, since I wouldn't have recalled the statement. Many grad students would immediately know what to do, and there are many books discussing this, but there are also many mathematicians in other areas who just don't know this.
In order to apply the Weil conjectures, you have to recognize that they are relevant, know what they say, and do some routine calculation. As I suggested, the Weil conjectures are a very natural subject to have a problem about. If you know anything about the Weil conjectures, you know that they are about counting points of varieties over a finite field, which is straightforwardly what the problem asks. Further, this is the simplest case, that of a curve, which is e.g. what you'd see as an example in an introduction to the subject.
Regarding the calculation, parts of it are easier if you can run some code, but basically at this point you're following a routine pattern. There are definitely many examples of someone working out what the Weil conjectures say for some curve in the training set.
Further, asking Claude a bit, it looks like there are some particularly common answers here. So, if you skip some of the calculation and guess, or if you make a mistake, you have a decent chance of getting the right answer by luck. You still need the sign on the middle term, but that's just one bit of information. I don't understand this well enough to know if there's a shortcut here without guessing.
Overall, I feel that the benchmark has been misrepresented. If this problem is representative, it seems to test broad factual knowledge of advanced mathematics more than problem-solving ability. Of course, this question is marked as the easiest of the listed ones. Daniel Litt says something like this about some other problems as well, but I don't really understand how routine he's saying that they are, and I haven't tried to understand the solutions myself.
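For a sense of how routine the calculation is, here's the genus-one case on a toy example (my own choice of curve and primes, not the actual FrontierMath curve): counting points of $y^2 = x^3 + ax + b$ over $\mathbb{F}_p$ by brute force and checking the Hasse-Weil bound $|\#C(\mathbb{F}_p) - (p+1)| \le 2\sqrt{p}$, which is the curve case of the Weil conjectures.

```python
# Count points on the elliptic curve y^2 = x^3 + ax + b over F_p
# (affine points plus one point at infinity) and check the Hasse-Weil
# bound |#C(F_p) - (p + 1)| <= 2*sqrt(p).
import math

def count_points(a, b, p):
    # For each residue r, how many y in F_p have y^2 = r?
    squares = {}
    for y in range(p):
        r = y * y % p
        squares[r] = squares.get(r, 0) + 1
    total = 1  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        total += squares.get(rhs, 0)
    return total

for p in [5, 7, 11, 101]:
    n = count_points(1, 1, p)  # the (nonsingular) curve y^2 = x^3 + x + 1
    assert abs(n - (p + 1)) <= 2 * math.sqrt(p)
    print(p, n)
```

The interesting content of the conjectures is of course not this brute-force count but the constraint it satisfies; the code only illustrates the shape of the check one does after applying them.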
Well, I guess describing a model of a computably enumerable theory, like PA or ZFC, counts. We could also ask for a model of PA that's nonstandard in a particular way that we want, e.g. by asking for a model of $\mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA})$, and that works the same way. Describing a reflective oracle has low solutions too, though this is pretty similar to the consistent guessing problem. Another one, which is really just a restatement of the low basis theorem, but perhaps a more evocative one, is as follows. Suppose some oracle machine has the property that there is some oracle that would cause it to run forever starting from an empty tape. Then, there is a low such oracle.
(Technically, these aren't decision problems, since they don't tell us what the right decision is, but just give us conditions that whatever decisions we make have to satisfy. I don't know what to say instead; this is more general than e.g. promise problems. Maybe I'd use something like decision-class problems?)
All these have a somewhat similar flavour by the nature of the low basis theorem. We can enumerate a set of constraints, but we can't necessarily compute a single object satisfying all the constraints. But the theorem tells us that there's a low such object.
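The compactness half of this picture can be illustrated with a finite toy (the constraint and depth below are my own invention): constraints rule off nodes of the binary tree, and a depth-first search still finds a surviving path. The genuinely hard part of the low basis theorem, invisible here, is doing this for infinite trees while keeping the path low.

```python
# Finite shadow of a Pi^0_1 class: paths through the binary tree avoiding
# nodes ruled out by a constraint. A dead node kills its whole subtree;
# depth-first search finds the leftmost surviving path (Koenig's lemma in
# miniature).
def find_path(dead, depth, prefix=""):
    if dead(prefix):
        return None
    if len(prefix) == depth:
        return prefix
    for bit in "01":
        path = find_path(dead, depth, prefix + bit)
        if path is not None:
            return path
    return None

# Constraint: no two consecutive ones anywhere along the path.
leftmost = find_path(lambda s: "11" in s, 10)
print(leftmost)  # "0000000000", the leftmost path satisfying the constraint
```

In the infinite setting, the constraints are only enumerated over time, so we can't compute a path this way; the theorem says a low path nevertheless exists.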
I don't know what the situation is for subsets of the digits of Chaitin's constant. Can it be as hard as the halting problem? You might try to refute this using some sort of incompressibility idea. Can it be low? I'd expect not, at least for computable subsets of indices of positive density. Plausibly computability theorists know about this stuff. They do like constructing posets of Turing degrees of various shapes, and they know about which shapes can be realized between $\mathbf{0}$ and the halting degree $\mathbf{0}'$. (E.g. this paper.)
Yeah, there's a sort of trick here. The natural question is uniform--we want a single reduction that can work from any consistent guessing oracle, and we think it would be cheating to do different things with different oracles. But this doesn't matter for the solution, since we produce a single consistent guessing oracle that can't be reduced to the halting problem.
This reminds me of the theory of enumeration degrees, a generalization of Turing degrees allowing open-set-flavoured behaviour like we see in partial recursive functions: if the answer to an oracle query is positive, the oracle must eventually tell you, but if the answer is negative it keeps you waiting indefinitely. I find the theory of enumeration degrees to be underemphasized in discussion of computability theory, but e.g. Odifreddi has a chapter on it all the way at the end of Classical Recursion Theory Volume II.
The consistent guessing problem isn't a problem about enumeration degrees. It's using a stronger kind of uniformity--we want to be uniform over oracles that guess consistently in different ways, not over a set of different presentations of the same answers. But there is again a kind of strangeness in the behaviour of uniformity, in that we get equivalent notions whether or not we ask that a reduction between sets $A$, $B$ be a single functional that uniformly enumerates $A$ from enumerations of $B$, so there might be some common idea here. More generally, enumeration degrees feel like they let us express more naturally things that are a bit awkward to say in terms of Turing degrees--it's natural to think about the set of computations that are enumerable/$\Sigma^0_1$ in a set--so it might be a useful keyword.
Note that Andy Drucker is not claiming to have discovered this; the paper you link is expository.
Since Drucker doesn't say this in the link, I'll mention that the objects you're discussing are conventionally known as PA degrees. The PA here stands for Peano arithmetic; a Turing degree solves the consistent guessing problem iff it computes some model of PA. This name may be a little misleading, in that PA isn't really special here. A Turing degree computes some model of PA iff it computes some model of ZFC, or more generally any theory capable of expressing arithmetic.
Drucker also doesn't mention the name of the theorem that this result is a special case of: the low basis theorem. "Low" here suggests low computability strength. Explicitly, a Turing degree $\mathbf{d}$ is low if solving the halting problem for machines with an oracle for $\mathbf{d}$ is equivalent (in the sense of Turing reductions) to solving the halting problem for Turing machines without any oracle. The low basis theorem says that every infinite computable binary tree has a low path. We are able to apply the theorem to this problem, concluding that there is a consistent guessing oracle $C$ which is low. So, we cannot use $C$ to solve the halting problem; if we could, then the halting problem for machines with an oracle for $C$ would be at least as hard as the halting problem for machines with an oracle for the halting set, but by lowness the halting set already suffices to compute the halting problem for machines with an oracle for $C$, which is a contradiction.
Various other things are known about PA degrees, though I'm not sure what might be of interest to you or others here. This stuff is discussed in books on computability theory, like Robert Soare's Computability Theory and Applications. Though, I thought I learned about PA degrees from his earlier book, but now I don't see them in there, so maybe I just learned about PA degrees around the same time, possibly following my interest in your and others' work on reflective oracles. The basics of computability theory--Turing degrees, the Turing jump, and the arithmetic hierarchy in the computability sense--may be of interest to the extent there is anything there that you're not already familiar with. With regard to PA degrees, in particular people like to talk about diagonally nonrecursive functions. This works as follows. Let $\varphi_n$ denote the $n$th partial computable function according to some Gödel numbering. The PA degrees are exactly the Turing degrees that compute functions $f$ such that $f(n) \neq \varphi_n(n)$ for all numbers $n$ at which the right-hand side is defined. This is suggestive of the ideas around reflective oracles, the Lawvere fixed-point theorem, etc. But I wouldn't say that when I think about these things, I think of them in terms of diagonally nonrecursive functions; plausibly that's not an interesting direction to point people in.
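A toy sketch of the diagonal condition (the lookup table below is invented; the real diagonal $\varphi_n(n)$ is not computable, which is exactly why computing such an $f$ takes a special oracle):

```python
# Given a finite table standing in for the diagonal phi_n(n) of the
# partial computable functions (None marks divergence), build f with
# f(n) != phi_n(n) wherever phi_n(n) is defined.
phi = {0: 7, 1: None, 2: 0, 3: 3}  # phi[n] stands in for phi_n(n)

def f(n):
    value = phi.get(n)
    return 0 if value is None else value + 1

for n, value in phi.items():
    if value is not None:
        assert f(n) != value
```

The theorem is that escaping the diagonal everywhere, for the actual $\varphi_n$, is exactly as hard as computing a model of PA.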
I haven't read too closely, but it looks like the equivalence relation that you're talking about in the post makes elements that are scalar multiples of each other equivalent. This isn't the point of my equivalence; the stuff I wrote is all in terms of vectors, not directions. My other top-level comment discusses this.
Yeah, this could be clearer. The point is that $\frac{1}{c(v^+ + w^-)}(v^+ + w^-)$ and $\frac{1}{c(v^+ + w^-)}(v^- + w^+)$ are formal sums of elements of $\mathcal{L}$. These formal sums have positive coefficients which sum to 1, so they represent convex combinations. But they're not equal as formal sums; only the results of applying the convex combination operation of $\mathcal{L}$ are equal.
We can then quotient by this relation to get a vector space
I think you're confusing two different parts here. There's a quotient of a vector space to get a vector space, which is done to embed $\mathcal{L}$ in a vector space. There's also something sort of like a projectivization, which does not produce a vector space. In the method I prefer, there isn't an explicit quotient, but instead just functions on the vector space that satisfy certain properties. (I could see being convinced to prefer the other version if it did improve the presentation.)
of differences of lotteries
Is this supposed to be the square of the space of lotteries? The square would correspond to formal differences, but actual differences would be a different space.
The point of my construction with formal differences is that differences of lotteries are not defined a priori. If we embed $\mathcal{L}$ in a vector space then we have already done what my construction is for. This is all in https://link.springer.com/article/10.1007/BF02413910 in some form, and many other places.
Happy to talk more about this.
Q5 is true if (as you assumed) the space of lotteries is the space of distributions over a finite set. (For a general convex set, you can get long-line phenomena.)
First, without proof, I'll state the following generalization.
Theorem 1. Let $\preceq$ be a relation on a convex space $\mathcal{L}$ satisfying axioms A1, A2, A3, and the following additional continuity axiom. For all $A \in \mathcal{L}$, the set
$$\{ B \in \mathcal{L} : B \prec A \}$$
is open in $\mathcal{L}$. Then, there exists a function $u$ from $\mathcal{L}$ to the long line such that $A \preceq B$ iff $u(A) \leq u(B)$.
The proof is not too different, but simpler, if we also assume A4. In particular, we no longer need the extra continuity axiom, and we get a stronger conclusion. Nate sketched part of the proof of this already, but I want to be clearer about what is stated and skip fewer steps. In particular, I'm not sure how Nate's hypotheses rule out examples that require long-line-valued functions—maybe he's assuming that the domain of the preference relation is a finite-dimensional simplex like I am, but none of his arguments use this explicitly.
Theorem 2. Let $\preceq$ be a relation on a finite-dimensional simplex $\Delta$ satisfying axioms A1-A4. Then, there is a quasiconcave function $u \colon \Delta \to \mathbb{R}$ such that $A \preceq B$ iff $u(A) \leq u(B)$.
First, I'll set up some definitions and a lemma. For any lotteries $A$, $B$, let $[A, B]$ denote the line segment
$$\{ (1 - t)A + tB : t \in [0, 1] \}.$$
We say that preferences are increasing along a line segment $[A, B]$ if whenever $0 \leq s \leq t \leq 1$, we have
$$(1 - s)A + sB \preceq (1 - t)A + tB.$$
We will also use open and half-open interval notation in the corresponding way.
Lemma. Let $\preceq$ be a preference relation on a finite-dimensional simplex $\Delta$ satisfying axioms A1-A4. Then, there are $\preceq$-minimal and $\preceq$-maximal elements in $\Delta$.
Proof. First, we show that there is a minimal element. Axiom A4 states that for any mixture $C = \lambda A + (1 - \lambda)B$, either $A \preceq C$ or $B \preceq C$. By induction, it follows more generally that any convex combination $C$ of finitely many elements $A_1, \dots, A_n$ satisfies $A_i \preceq C$ for some $i$. But every element of $\Delta$ is a convex combination of the vertices of $\Delta$, so some vertex of $\Delta$ is $\preceq$-minimal.
The proof that there is a maximal element is more complex. Consider the family of sets
$$\{\, \{ B \in \Delta : A \preceq B \} : A \in \Delta \,\}.$$
This is a prefilter, so since $\Delta$ is compact ($\Delta$ here carries the Euclidean metric), it has a cluster point. Either this cluster point will be a maximal element, or we will find some other maximal element. In particular, take any cluster point $B$. We are done if $B$ is a maximal element; otherwise, pick some $A \succ B$. By the construction of $B$, for every $n$, we can pick some $C_n \succeq A$ within a distance of $1/n$ from $B$. Now, if we show that $B$ itself satisfies $A \preceq B$, it will follow that $B$ is maximal.
The idea is to pass from our sequence $(C_n)$, with limit $B$, to another sequence lying on a line segment with endpoint $B$. We can use axiom A4, which is a kind of convexity, to control the preference relation on convex combinations of our points $C_n$, so these are the points that we will construct along a line segment. Once we have this line segment, we can finish by using A3, which is a kind of continuity restricted to line segments, to control $B$ itself.
Let $\Delta'$ be the set of lotteries in the affine span of the set $\{C_n : n \in \mathbb{N}\}$. Then, if we take some index set $I$ such that $(C_i)_{i \in I}$ is a maximal affinely independent tuple, it follows that $(C_i)_{i \in I}$ affinely generates $\Delta'$. Hence, the convex combination
$$D = \frac{1}{|I|} \sum_{i \in I} C_i,$$
i.e. the barycenter of the simplex with vertices at $(C_i)_{i \in I}$, is in the interior of the convex hull of $(C_i)_{i \in I}$ relative to $\Delta'$, so we can pick some $\epsilon > 0$ such that the $\epsilon$-ball around $D$ relative to $\Delta'$ is contained in this simplex.
Now, we will see that every lottery in the set $[D, B)$ satisfies $\succeq A$. For any $E = (1 - t)D + tB$ in $[D, B)$ with $t > 0$, pick $n$ so that $C_n$ is in the $\frac{(1 - t)\epsilon}{t}$-ball around $B$. Since the tangent vector $\frac{t}{1 - t}(B - C_n)$ then has length less than $\epsilon$, the lottery
$$F = D + \frac{t}{1 - t}(B - C_n)$$
is in the $\epsilon$-ball around $D$, and it is in $\Delta'$, so it is in the simplex with vertices $(C_i)_{i \in I}$. Then, $A \preceq F$ by A4, and $A \preceq C_n$ by hypothesis. So, applying A4 again,
$$A \preceq (1 - t)F + t C_n = E.$$
Using A4 one more time for the remaining case $E = D$, it follows that every lottery $E$ in $[D, B)$ satisfies $A \preceq E$.
Now we can finish up. If $A \succ B$ then, using A3 and the fact that $B$ is an endpoint of the segment $[D, B]$, there would have to be some lottery in $[D, B)$ close to $B$ that is $\prec A$, but this would contradict what we just concluded. So, $A \preceq B$, and so $B$ is $\preceq$-maximal.
Proof of Theorem 2. Let $m$ be a $\preceq$-minimal and $M$ a $\preceq$-maximal element of $\Delta$. First, we will see that preferences are increasing on $[m, M]$, and then we will use this fact to construct a function $u$ and show that it has the desired properties. Suppose preferences were not increasing; then, there would be $X, Y \in [m, M]$ such that $X$ is closer to $m$ while $Y$ is closer to $M$, and $Y \prec X$. Then, $Y$ would be a convex combination of $X$ and $M$, but $Y \prec X \preceq M$ by the maximality of $M$, contradicting A4.
Now we can construct our utility function using A3; for each $\sim$-class $[A]$, we have $m \preceq A \preceq M$, so there is some[1] $t_A \in [0, 1]$ such that
$$(1 - t_A)m + t_A M \sim A.$$
Then, let $u(A) = t_A$ for all $A$. Since preferences are increasing on $[m, M]$, it is immediate that if $A \preceq B$, then $u(A) \leq u(B)$. Conversely, if $u(A) \leq u(B)$, we have two cases. If $u(A) < u(B)$, then $(1 - u(A))m + u(A)M \preceq (1 - u(B))m + u(B)M$ since preferences are increasing on $[m, M]$, and so $A \preceq B$. Finally, if $u(A) = u(B)$, then $A \sim B$ by construction.
Finally, since for all $A$, $B$ we have $A \preceq B$ iff $u(A) \leq u(B)$, it follows immediately that $u$ is quasiconcave by A4.
- ^
Nate mentions using choice in his answer, but here at least the use of choice is removable. Since preferences are monotone on $[m, M]$, the intersection of the $\sim$-class of $A$ with $[m, M]$ is a subinterval of $[m, M]$, so we can pick $t_A$ based on the midpoint of that interval.
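To make the construction concrete, here's a numerical sketch (the preference relation below comes from a toy quasiconcave utility of my own choosing, not anything in the original discussion): on lotteries over three outcomes, we recover a utility by bisecting along the segment from a minimal to a maximal element for the equivalent mixture, using only preference comparisons, in the spirit of the footnote's midpoint remark.

```python
# Toy preferences on the 2-simplex induced by the quasiconcave utility
# u0(p) = min(p0, p1); m = (0,0,1) is minimal and M = (.5,.5,0) maximal,
# and u0 is increasing along [m, M]. We reconstruct a utility by binary
# search for the t with (1-t)m + tM ~ A, querying only the preference order.
def u0(p):
    return min(p[0], p[1])

def prefer(a, b):  # a is at least as good as b
    return u0(a) >= u0(b)

def mix(t, m, M):
    return tuple((1 - t) * x + t * y for x, y in zip(m, M))

def utility(A, m=(0.0, 0.0, 1.0), M=(0.5, 0.5, 0.0), iters=50):
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        t = (lo + hi) / 2
        if prefer(A, mix(t, m, M)):
            lo = t  # A is still at least as good as the mixture: move up
        else:
            hi = t
    return (lo + hi) / 2

A = (0.2, 0.3, 0.5)          # u0(A) = 0.2, so the equivalent mixture is t = 0.4
print(round(utility(A), 6))  # ~0.4
```

The bisection is exactly where continuity along line segments (A3) gets used: it guarantees the search converges to an equivalent mixture rather than jumping over it.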
Nice, I like this proof also. Maybe there's a clearer way to say this, but your "unrolling one step" corresponds to my going from the first accessible world to the second. We somehow need to "look two possible worlds deep".
Here's a simple Kripke frame proof of Payor's lemma.
Let $(W, R, \Vdash)$ be a Kripke frame over our language, i.e. $W$ is a set of possible worlds, $R$ is an accessibility relation, and $\Vdash$ judges that a sentence holds in a world. Now, suppose for contradiction that $\square(\square x \rightarrow x) \rightarrow x$ holds in every world, but that $x$ does not, i.e. $x$ fails in some world $w$.
A bit of De Morganing tells us that the hypothesis on $x$ is equivalent to $\neg x \rightarrow \Diamond(\square x \wedge \neg x)$, so $w \Vdash \Diamond(\square x \wedge \neg x)$. So, there is some world $u$ with $w R u$ such that $u \Vdash \square x \wedge \neg x$. But again looking at our equivalent form applied at $u$, we see that $u \Vdash \Diamond(\square x \wedge \neg x)$, so $u \nVdash \square x$, a contradiction.
Both this proof and the proof in the post are very simple, but at least for me I feel like this proof tells me a bit more about what's going on, or at least tells me something about what's going on that the other doesn't. Though in a broader sense there's a lot I don't know about what's going on in modal fixed points.
Kripke frame-style semantics are helpful for thinking about lots of modal logic things. In particular, there are cool inductiony interpretations of the Gödel/Löb theorems. These are more complicated, but I'd be happy to talk about them sometime.
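Since the argument above only uses the hypothesis at each world, it should be valid on every Kripke frame, and we can sanity-check that by brute force over all small frames (a verification sketch of my own, not part of the original proof):

```python
# Over every frame with at most 3 worlds, every accessibility relation,
# and every valuation of x: if box(box x -> x) -> x holds at all worlds,
# then x holds at all worlds.
from itertools import product

def check(n):
    worlds = range(n)
    for rel in product([False, True], repeat=n * n):
        R = {(i, j) for i in worlds for j in worlds if rel[i * n + j]}
        for val in product([False, True], repeat=n):
            x = list(val)
            def box(p):  # w forces box p iff p holds at all R-successors of w
                return [all(p[v] for v in worlds if (w, v) in R) for w in worlds]
            bx = box(x)
            hyp = box([(not b) or xv for b, xv in zip(bx, x)])  # box(box x -> x)
            if all((not h) or xv for h, xv in zip(hyp, x)):     # hypothesis everywhere
                assert all(x), (R, val)  # ...forces x everywhere

for n in range(1, 4):
    check(n)
print("verified on all frames with up to 3 worlds")
```

This is of course only a finite sanity check of the semantic claim, not a proof, but it's reassuring that no frame conditions (transitivity, converse well-foundedness) are needed.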
Theorem. Weak HCH (and similar proposals) contain EXP.
Proof sketch: I give a strategy that H can follow to determine whether some machine $M$ that runs in time $T$ accepts. Basically, we need to answer questions of the form "Does cell $i$ have value $v$ at time $t$?" and "Was the head in position $i$ at time $t$?", where $i$ and $t$ are bounded by some function in $2^{n^{O(1)}}$. Using place-system representations of $i$ and $t$, these questions have length in $n^{O(1)}$, so they can be asked. Further, each question is a simple function of a constant number of other such questions about earlier times as long as $t > 0$, and can be answered directly in the base case $t = 0$.
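As a concrete toy illustration of this recursion (the 2-state busy beaver is my own choice of machine), here each question about time $t$ unwinds into a constant number of questions about time $t-1$, with memoization standing in for HCH's tree of subquestions:

```python
# Answer "what is cell i at time t?" and "where is the head at time t?"
# purely by recursive queries about time t-1, then cross-check against a
# direct simulation. delta[state, symbol] = (write, move, next state).
from functools import lru_cache

delta = {('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
         ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H')}

@lru_cache(maxsize=None)
def config(t):  # (state, head position) at time t
    if t == 0:
        return 'A', 0
    s, p = config(t - 1)
    _, move, new_state = delta[s, cell(p, t - 1)]
    return new_state, p + move

@lru_cache(maxsize=None)
def cell(i, t):  # tape symbol in cell i at time t (blank tape at t = 0)
    if t == 0:
        return 0
    s, p = config(t - 1)
    if p == i:
        return delta[s, cell(i, t - 1)][0]
    return cell(i, t - 1)

# Direct simulation of the 6 steps BB(2) takes before halting.
tape, s, p = {}, 'A', 0
for _ in range(6):
    w, mv, s = delta[s, tape.get(p, 0)]
    tape[p], p = w, p + mv
assert all(cell(i, 6) == tape.get(i, 0) for i in range(-4, 5))
print(sorted((i, v) for i, v in tape.items() if v))  # four ones written
```

In the actual argument the recursion depth is exponential, which is why the questions are passed down the HCH tree rather than memoized in one head.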
*I* think that there's a flaw in the argument.
I could elaborate, but maybe you want to think about this more, so for now I'll just address your remark about $\neg\square\phi \rightarrow \phi$, where $\phi$ is refutable. If we assume that $\neg\square\phi \rightarrow \phi$, then, since $\phi$ is false, $\neg\square\phi$ must be false, so $\square\phi$ must be true. That is, you have proven that PA proves $\square\phi$; that is, since $\phi$ is contradictory, PA proves its own inconsistency. You're right that this is compatible with PA being consistent--PA may be consistent but prove its own inconsistency--but this should still be worrying.
This reminds me of the Discourse on Method.
[T]here is seldom so much perfection in works composed of many separate parts, upon which different hands had been employed, as in those completed by a single master. Thus it is observable that the buildings which a single architect has planned and executed, are generally more elegant and commodious than those which several have attempted to improve, by making old walls serve for purposes for which they were not originally built. Thus also, those ancient cities which, from being at first only villages, have become, in course of time, large towns, are usually but ill laid out compared with the regularly constructed towns which a professional architect has freely planned on an open plain; so that although the several buildings of the former may often equal or surpass in beauty those of the latter, yet when one observes their indiscriminate juxtaposition, there a large one and here a small, and the consequent crookedness and irregularity of the streets, one is disposed to allege that chance rather than any human will guided by reason must have led to such an arrangement. And if we consider that nevertheless there have been at all times certain officers whose duty it was to see that private buildings contributed to public ornament, the difficulty of reaching high perfection with but the materials of others to operate on, will be readily acknowledged. In the same way I fancied that those nations which, starting from a semi-barbarous state and advancing to civilization by slow degrees, have had their laws successively determined, and, as it were, forced upon them simply by experience of the hurtfulness of particular crimes and disputes, would by this process come to be possessed of less perfect institutions than those which, from the commencement of their association as communities, have followed the appointments of some wise legislator.
It is thus quite certain that the constitution of the true religion, the ordinances of which are derived from God, must be incomparably superior to that of every other. And, to speak of human affairs, I believe that the pre-eminence of Sparta was due not to the goodness of each of its laws in particular, for many of these were very strange, and even opposed to good morals, but to the circumstance that, originated by a single individual, they all tended to a single end. In the same way I thought that the sciences contained in books (such of them at least as are made up of probable reasonings, without demonstrations), composed as they are of the opinions of many different individuals massed together, are farther removed from truth than the simple inferences which a man of good sense using his natural and unprejudiced judgment draws respecting the matters of his experience. And because we have all to pass through a state of infancy to manhood, and have been of necessity, for a length of time, governed by our desires and preceptors (whose dictates were frequently conflicting, while neither perhaps always counseled us for the best), I farther concluded that it is almost impossible that our judgments can be so correct or solid as they would have been, had our reason been mature from the moment of our birth, and had we always been guided by it alone.
It is true, however, that it is not customary to pull down all the houses of a town with the single design of rebuilding them differently, and thereby rendering the streets more handsome; but it often happens that a private individual takes down his own with the view of erecting it anew, and that people are even sometimes constrained to this when their houses are in danger of falling from age, or when the foundations are insecure. With this before me by way of example, I was persuaded that it would indeed be preposterous for a private individual to think of reforming a state by fundamentally changing it throughout, and overturning it in order to set it up amended; and the same I thought was true of any similar project for reforming the body of the sciences, or the order of teaching them established in the schools: but as for the opinions which up to that time I had embraced, I thought that I could not do better than resolve at once to sweep them wholly away, that I might afterwards be in a position to admit either others more correct, or even perhaps the same when they had undergone the scrutiny of reason. I firmly believed that in this way I should much better succeed in the conduct of my life, than if I built only upon old foundations, and leaned upon principles which, in my youth, I had taken upon trust. For although I recognized various difficulties in this undertaking, these were not, however, without remedy, nor once to be compared with such as attend the slightest reformation in public affairs. Large bodies, if once overthrown, are with great difficulty set up again, or even kept erect when once seriously shaken, and the fall of such is always disastrous. 
Then if there are any imperfections in the constitutions of states (and that many such exist the diversity of constitutions is alone sufficient to assure us), custom has without doubt materially smoothed their inconveniences, and has even managed to steer altogether clear of, or insensibly corrected a number which sagacity could not have provided against with equal effect; and, in fine, the defects are almost always more tolerable than the change necessary for their removal; in the same manner that highways which wind among mountains, by being much frequented, become gradually so smooth and commodious, that it is much better to follow them than to seek a straighter path by climbing over the tops of rocks and descending to the bottoms of precipices.
...
And finally, as it is not enough, before commencing to rebuild the house in which we live, that it be pulled down, and materials and builders provided, or that we engage in the work ourselves, according to a plan which we have beforehand carefully drawn out, but as it is likewise necessary that we be furnished with some other house in which we may live commodiously during the operations, so that I might not remain irresolute in my actions, while my reason compelled me to suspend my judgement, and that I might not be prevented from living thenceforward in the greatest possible felicity, I formed a provisory code of morals, composed of three or four maxims, with which I am desirous to make you acquainted.
The first was to obey the laws and customs of my country, adhering firmly to the faith in which, by the grace of God, I had been educated from my childhood and regulating my conduct in every other matter according to the most moderate opinions, and the farthest removed from extremes, which should happen to be adopted in practice with general consent of the most judicious of those among whom I might be living. For as I had from that time begun to hold my own opinions for nought because I wished to subject them all to examination, I was convinced that I could not do better than follow in the meantime the opinions of the most judicious; and although there are some perhaps among the Persians and Chinese as judicious as among ourselves, expediency seemed to dictate that I should regulate my practice conformably to the opinions of those with whom I should have to live; and it appeared to me that, in order to ascertain the real opinions of such, I ought rather to take cognizance of what they practised than of what they said, not only because, in the corruption of our manners, there are few disposed to speak exactly as they believe, but also because very many are not aware of what it is that they really believe; for, as the act of mind by which a thing is believed is different from that by which we know that we believe it, the one act is often found without the other. Also, amid many opinions held in equal repute, I chose always the most moderate, as much for the reason that these are always the most convenient for practice, and probably the best (for all excess is generally vicious), as that, in the event of my falling into error, I might be at less distance from the truth than if, having chosen one of the extremes, it should turn out to be the other which I ought to have adopted. 
And I placed in the class of extremes especially all promises by which somewhat of our freedom is abridged; not that I disapproved of the laws which, to provide against the instability of men of feeble resolution, when what is sought to be accomplished is some good, permit engagements by vows and contracts binding the parties to persevere in it, or even, for the security of commerce, sanction similar engagements where the purpose sought to be realized is indifferent: but because I did not find anything on earth which was wholly superior to change, and because, for myself in particular, I hoped gradually to perfect my judgments, and not to suffer them to deteriorate, I would have deemed it a grave sin against good sense, if, for the reason that I approved of something at a particular time, I therefore bound myself to hold it for good at a subsequent time, when perhaps it had ceased to be so, or I had ceased to esteem it such.
My second maxim was to be as firm and resolute in my actions as I was able, and not to adhere less steadfastly to the most doubtful opinions, when once adopted, than if they had been highly certain; imitating in this the example of travelers who, when they have lost their way in a forest, ought not to wander from side to side, far less remain in one place, but proceed constantly towards the same side in as straight a line as possible, without changing their direction for slight reasons, although perhaps it might be chance alone which at first determined the selection; for in this way, if they do not exactly reach the point they desire, they will come at least in the end to some place that will probably be preferable to the middle of a forest. In the same way, since in action it frequently happens that no delay is permissible, it is very certain that, when it is not in our power to determine what is true, we ought to act according to what is most probable; and even although we should not remark a greater probability in one opinion than in another, we ought notwithstanding to choose one or the other, and afterwards consider it, in so far as it relates to practice, as no longer dubious, but manifestly true and certain, since the reason by which our choice has been determined is itself possessed of these qualities. This principle was sufficient thenceforward to rid me of all those repentings and pangs of remorse that usually disturb the consciences of such feeble and uncertain minds as, destitute of any clear and determinate principle of choice, allow themselves one day to adopt a course of action as the best, which they abandon the next, as the opposite.
(This is probably 5% of the text. There is more interesting stuff there, but it's less relevant to this post.)
As you say, this isn't a proof, but it wouldn't be too surprising if this were consistent. There is some $k$ such that $\mathrm{Con}_T(n)$, the statement that there is no proof of a contradiction in $T$ of length at most $n$, has a proof of length $O(n^k)$, by a result of Pavel Pudlák (On the length of proofs of finitistic consistency statements in first order theories). Here I'm making the dependence on $n$ explicit, but not the dependence on $T$. I haven't looked at it closely, but the proof strategy in Theorems 5.4 and 5.5 suggests that $k$ will not depend on $T$, as long as we only ask for the weaker property that $\mathrm{Con}_T(n)$ be provable in length $O(n^k)$ for sentences of length at most $n$.
I misunderstood your proposal, but you don't need to do this work to get what you want. You can just take each sentence $\phi$ as an axiom, but declare that this axiom takes many symbols to invoke. This could be done by changing the notion of length of a proof, or by taking axioms $\psi \rightarrow \phi$ and $\psi$, with $\psi$ very long.
Yeah, I had something along the lines of what Paul said in mind. I wanted not to require that the circuit implement exactly a given function, so that we could see if daemons show up in the output. It seems easier to define daemons if we can just look at input-output behaviour.
I'm having trouble thinking about what it would mean for a circuit to contain daemons such that we could hope for a proof. It would be nice if we could find a simple such definition, but it seems hard to make this intuition precise.
For example, we might say that a circuit contains daemons if it displays more optimization than necessary to solve a problem. Minimal circuits could have daemons under this definition though. Suppose that some function $f$ describes the behaviour of some powerful agent, a function $g$ is like $f$ but with noise added, and our problem is to predict the function $g$ sufficiently well. Then, the simplest circuit that does well won't bother to memorize a bunch of noise, so it will pursue the goals of the agent described by $f$ more efficiently than $g$ does, and thus more efficiently than necessary.
Two minor comments. First, the bitstrings that you use do not all correspond to worlds, since the sentences are not logically independent; for example, $\mathrm{Con}(\mathrm{ZFC})$ implies $\mathrm{Con}(\mathrm{PA})$, as PA is a subtheory of ZFC. This can be fixed by instead using a tree of sentences that all diagonalize against themselves. Tsvi and I used a construction in this spirit in A limit-computable, self-reflective distribution, for example.
Second, I believe that weakening #2 in this post also cannot be satisfied by any constant distribution. To sketch my reasoning, a trader can try to buy a sequence of sentences $\phi_1, \phi_2, \dots$, spending $2^{-n}$ on the $n$th conjunction $\phi_1 \wedge \dots \wedge \phi_n$. It should choose the sequence of sentences so that $\phi_1 \wedge \dots \wedge \phi_n$ has probability at most $2^{-n}$, and then it will make an infinite amount of money if the sentences are all simultaneously true.
The way to do this is to choose each $\phi_n$ from a list of all sentences. If at any point you notice that $\phi_1 \wedge \dots \wedge \phi_n$ has too high a probability, then pick a new sentence for $\phi_n$. We can sell all the conjunctions from $n$ onwards and get back the original amount paid, by hypothesis. Then, if we keep using sharper continuous tests of the probabilities of the sentences over time, we will settle down to a sequence with the desired property.
In order to turn this sketch into a proof, we need to be more careful about how these things are to be made continuous, but it seems routine.
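Numerically, the budget-versus-payout arithmetic of this trader looks as follows (the market prices here are mocked up to satisfy the $2^{-n}$ bound; in reality they would come from the distribution being exploited):

```python
# A mock market prices the conjunction of the first n sentences at
# p_n <= 2^-n. The trader spends 2^-n on the nth conjunction, so total
# spending stays below 1, while the payout (if all sentences are true)
# grows linearly in the number of conjunctions bought.
prices = [2 ** -n / 3 for n in range(1, 21)]  # invented prices, p_n <= 2^-n

spent = payout = 0.0
for n, p in enumerate(prices, start=1):
    cost = 2.0 ** -n
    spent += cost
    payout += cost / p  # shares bought; each share pays 1 if the conjunction is true
assert spent < 1 and payout >= len(prices)
print(spent, payout)
```

Continuing the sequence forever keeps the spending bounded while the conditional payout is unbounded, which is the exploitation.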
I at first didn't understand your argument for claim (2), so I wrote an alternate proof that's a bit more obvious/careful. I now see why it works, but I'll give my version below for anyone interested. In any case, what you really mean is the probability of deciding a sentence outside of $S$ by having it announced by nature; there may be a high probability of sentences being decided indirectly via sentences in $S$.
Instead of choosing as you describe, pick so that the probability of sampling something in is greater than . Then, the probability of sampling something in is at least . Hence, no matter what sentences have been decided already, the probability that repeatedly sampling from selects before it selects any sentence outside of is at least
as desired.
Furthermore, this argument makes it clear that the probability distribution we converge to depends only on the set of sentences which the environment will eventually assert, not on their ordering!
Oh, I didn't notice that aspect of things. That's pretty cool.
A few thoughts:
I agree that the LI criterion is "pointwise" in the way that you describe, but I think that this is both pretty good and as much as could actually be asked. A single efficiently computable trader can do a lot. It can enforce coherence on a polynomially growing set of sentences, search for proofs using many different proof strategies, enforce a polynomially growing set of statistical patterns, enforce reflection properties on a polynomially large set of sentences, etc. So, eventually the market will not be exploitable on all these things simultaneously, which seems like a pretty good level of accurate beliefs to have.
On the other side of things, it would be far too strong to ask for a uniform bound of the form "for every \(\varepsilon > 0\), there is some day \(N\) such that after step \(N\), no trader can multiply its wealth by a factor of more than \(1 + \varepsilon\)". This is because a trader can be hardcoded with arbitrarily many hard-to-compute facts. For every \(k\), there must eventually be a day on which the beliefs of your logical inductor assign probability less than \(2^{-k}\) to some true statement, at which point a trader who has that statement hardcoded can multiply its wealth by a factor of about \(2^k\). (I can give a construction of such a sentence using self-reference if you want, but it's also intuitively natural - just pick \(2^k\) mutually exclusive statements with nothing to break the symmetry.)
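The pigeonhole step in that parenthetical can be made concrete. A minimal sketch (the probability assignment below is arbitrary, drawn at random): among \(2^k\) mutually exclusive statements, some statement must receive probability at most \(2^{-k}\), so an adversarial choice of which one is true lets a trader who bought it multiply its money by at least \(2^k\).

```python
# Among 2^k mutually exclusive statements, the probabilities sum to at most 1,
# so some statement gets probability <= 2^-k. If that statement is the true
# one, a trader buying it at that price multiplies its wealth by >= 2^k.

import random

random.seed(0)
k = 10
weights = [random.random() + 1e-9 for _ in range(2 ** k)]
total = sum(weights)
probs = [w / total for w in weights]   # any probability assignment at all
cheapest = min(probs)                  # pigeonhole: min <= mean = 2^-k
multiplier = 1.0 / cheapest            # payoff per dollar if it comes true
print(cheapest <= 2.0 ** -k, multiplier >= 2 ** k)
```

Nothing about the assignment matters here; the bound follows purely from the probabilities summing to one.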
Thus, I wouldn't think of traders as "mistakes", as you do in the post. A trader can gain money on the market if the market doesn't already know all facts that will be listed by the deductive process, but that is a very high bar. Doing well against finitely many traders is already "pretty good".
What you can ask for regarding uniformity is for some simple function \(f\) such that any trader \(T\) can multiply its wealth by at most a factor \(f(T)\). This is basically the idea of the mistake bound model in learning theory; you bound how many mistakes happen rather than when they happen. This would let you say more than the one-trader properties I mentioned in my first paragraph. In fact, the construction in the paper has this property; \(f(T)\) is just given by the initial wealth of the trader \(T\). You may therefore want to do something like setting traders' initial wealths according to some measure of complexity. Admittedly this isn't made explicit in the paper, but there's not much additional work that needs to be done to think in this way; it's just the combination of the individual proofs in the paper with the explicit bounds you get from the initial wealths of the traders involved.
I basically agree completely on your last few points. The traders are a model class, not an ensemble method in any substantive way, and it is just confusing to connect them to the papers on ensemble methods that the LI paper references. Also, while I use the idea of logical induction to do research that I hope will be relevant to practical algorithms, it seems unlikely that any practical algorithm will look much like a LI. For one thing, finding fixed points is really hard without some property stronger than continuity, and you'd need a pretty good reason to put it in the inner loop of anything.
Universal Prediction of Selected Bits solves the related question of what happens if the odd bits are adversarial but the even bits copy the preceding odd bits. Basically, the universal semimeasure learns to do the right thing, but the exact sense in which the result is positive is subtle and has to do with the difference between measures and semimeasures. The methods may also be relevant to the questions here, though I don't see a proof for either question yet.
Yeah, I like tail dependence.
There's this question of whether for logical uncertainty we should think of it more as trying to "un-update" from a more logically informed perspective rather than trying to use some logical prior that exists at the beginning of time. Maybe you've heard such ideas from Scott? I'm not sure if that's the right perspective, but it's what I'm alluding to when I say you're introducing artificial logical uncertainty.
Yeah, the 5 and 10 problem in the post actually can be addressed using provability ideas, in a way that fits in pretty naturally with logical induction. The motivation here is to work with decision problems where you can't prove statements of the form \(A() = a \rightarrow U() = u\) for agent \(A\), utility function \(U\), action \(a\), and utility value \(u\), at least not with the amount of computing power provided, but you want to use inductive generalizations instead. That isn't necessary in this example, so it's more of an illustration.
To say a bit more, if you make logical inductors propositionally consistent, similarly to what is done in this post, and make them assign things that have been proven already probability 1, then they will work on the 5 and 10 problem in the post.
It would be interesting if there was more of an analogy to explore between the provability oracle setting and the inductive setting, and more ideas could be carried over from modal UDT, but it seems to me that this is a different kind of problem that will require new ideas.
It's hard to analyze the dynamics of logical inductors too precisely, so we often have to do things that feel like worst-case analysis, like considering an adversarial trader with sufficient wealth. I think that problems that show up from this sort of analysis can be expected to correspond to real problems in superintelligent agents, but that is a difficult question. The malignancy of the universal prior is part of the reason.
As to your proposed solution, I don't see how it would work. Scott is proposing that the trader makes conditional contracts, which are in effect voided if the event that they are conditional on doesn't happen, so the trader doesn't actually lose anything in this case. (It isn't discussed in this post, but conditional contracts can be built out of the usual sort of bets, with payoffs given by the definition of conditional probability.) So, in order to make the trader lose money, the events need to happen sometimes, not just be expected to happen with some non-negligible probability by the market.
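The parenthetical's construction can be checked directly. A sketch with made-up prices (the helper `replication` and the specific values of \(q\) and \(P(B)\) are hypothetical): a contract "pays \$1 on \(A\), conditional on \(B\)" at price \(q\) is replicated by buying one share of \(A \wedge B\) and selling \(q\) shares of \(B\); when \(B\) fails, the position nets zero, so the trader indeed loses nothing in the voided case.

```python
# Replicate a conditional contract "pays $1 on A, given B" at price q
# by buying 1 share of (A and B) and selling q shares of B. With market
# prices satisfying q = P(A and B) / P(B), the replication costs nothing.

def replication(a, b, q, p_ab, p_b):
    cost = p_ab - q * p_b                           # buy (A and B), sell q of B
    payoff = (1 if a and b else 0) - q * (1 if b else 0)
    return cost, payoff

q, p_b = 0.6, 0.5
p_ab = q * p_b                                      # consistent market prices
outcomes = {(a, b): replication(a, b, q, p_ab, p_b)
            for a in (False, True) for b in (False, True)}
print(outcomes)
# Cost is 0 in every world; payoff is 0 whenever B is false, 1-q on A&B,
# and -q on B without A -- exactly the conditional bet, voided when B fails.
```

This is just the standard decomposition from the definition of conditional probability, written out as trades.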
In counterfactual mugging with a logical coin, AsDT always uses a logical inductor’s best-estimate of the utility it would get right now, so it sees the coin as already determined, and sees no benefit from giving Omega money in the cases where Omega asks for money.
The way I would think about what's going on is that if the coin is already known at the time that the expectations are evaluated, then the problem isn't convergent in the sense of AsDT. The agent that pays up whenever asked has a constant action, but it doesn't receive a constant expected utility. You can think of the averaging as introducing artificial logical uncertainty to make more things convergent, which is why it's more updateless. (My understanding is that this is pretty close to how you're thinking of it already.)
This isn't too related to your main point, but every ordered field can be embedded into a field of Hahn series, which might be simpler to work with than surreals.
That page discusses the basics of Hahn series, but not the embedding theorem. (Ehrlich, 1995) treats things in detail, but is long and introduces a lot of definitions. The embedding theorem is stated on page 23 (24 in the pdf).
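For convenience, here is the standard definition being referred to, stated with generic symbols (\(k\) a field, \(\Gamma\) an ordered abelian group):

```latex
% Hahn series field k[[t^Gamma]]: formal sums with well-ordered support,
% ordered by the sign of the coefficient at the smallest exponent.
\[
  k[[t^{\Gamma}]] \;=\; \Bigl\{\, \sum_{\gamma \in \Gamma} a_{\gamma} t^{\gamma}
    \;:\; a_{\gamma} \in k,\ \{\gamma : a_{\gamma} \neq 0\}
    \text{ is well-ordered} \,\Bigr\}
\]
% The embedding theorem cited above: every ordered field embeds into
% some R[[t^Gamma]] for a suitable ordered abelian group Gamma.
```

The well-ordered support condition is what makes multiplication well-defined, since each coefficient of a product is then a finite sum.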
Nicely done. I should have spent more time thinking about liar sentences; you really can do a lot with them.
Follow-up question - is the set of limits of logical inductors convex? (Your proof also makes me curious as to whether the set of "expanded limits" is convex, though that question is a bit less nice than the other).
I'm copying over some (lightly edited) conversation about this post from Facebook.
Sam Eisenstat: In the original counterexample to the Trolljecture, I guess you're proposing that be logically prior to , so we can go back and replace with , but still have to use in order to derive ? Here I'm just using "" to mean "counterfactually results in" without meaning to imply that this should be defined modally.
I agree that this is intuitively sensible, but it is difficult to give a general definition that agrees with intuition - it's hard to even have an intuition in the general case. There has been some work along these lines though; see for example https://agentfoundations.org/item?id=241 though you may already know about this since you've been talking with Scott. I'm pessimistic about this direction in general, though I'd be happy to talk further about it.
Jack Gallagher: What's the generator behind your being pessimistic?
Sam Eisenstat: Well, like Daniel suggests, we can ask whether the ordering is subjective/based on an agent's epistemic state. Unlike Daniel, I think there are problems with this. The best examples are things like transparent Newcomb's problem, where the agent needs to counterfact on things that it already knows the fact of the matter about. You can try to find loopholes in this, e.g. maybe the agent is doing something wrong if it isn't ignorant of certain facts, but I haven't been able to find something that's satisfying (e.g. doesn't fall to other thought experiments).
It thus seems that an agent should just counterfact on things even though it already knows them to be true, but this is a weird sort of thing to call "epistemic". It doesn't just depend on the agent's probability assignments to various statements. Thus, to go farther here would seem to require a richer idea of "agent" than we presently have.
The alternative is an objective temporal ordering on logical statements. (This is a terrible use of the excluded middle, since "subjective" and "objective" are smuggling in a lot, so the place to look is probably somewhere that feels like a third option.) Anyway, it seems absurd for there to be a fact of the matter as to whether 417*596 = 248532 comes temporally before or after 251*622 = 156122.
Part of this is just a feeling of arbitrariness, because even if one of those did come before the other, we don't have the tools to know which one. Still, it seems the tools we do have are the wrong shape for building something that would work. For example, there seems to be an affinity between the ideas of being "easy", of depending on few things but being depended on by many, and of being temporally "early". Patrick's revived trolljecture is one attempt at formalizing it, but there was nothing deep about my counterexample; it's just easy to generate statements that are difficult to evaluate. (There are different versions of this, like independent statements in undecidable theories, or hard SAT instances in propositional logic.)
I used the word "we" in the last paragraph, but maybe you have type-theory tools that are more suited to this?
Daniel Satanove: It's easier to come up with UDT when you have CDT. Is there a version of logical counterfactuals that works in more intuitive cases, but fails on stranger edge cases like transparent Newcomb's?
Sam Eisenstat: The EDT of logical counterfactuals is just EDT with a prior over logic (e.g. bounded Demski, something more sophisticated). Proof-based UDT is a special case of this; it prescribes actions if any reasonable prior would agree, and is silent otherwise.
Unfortunately, as shown by the trolljecture counterexample, there are counterfactuals that proof-based UDT gets wrong (the trolljecture basically says that what proof-based UDT does should be read as counterfactuals), and therefore that EDT gets wrong given any reasonable prior, so it's hard to define a "domain of validity" for this (not that that's necessary for it to be informative).
One candidate for the CDT of logical counterfactuals would just be putting a causal DAG structure on logical statements, like discussed in Scott's forum post that I linked above. The TDT paper is informal, and I haven't looked at it in a while, but I think that you can read it as proposing exactly this.
I believe that in all thought experiments where people have an intuition about what should be done, they are using this sort of reasoning plus strategy selection (i.e. updatelessness). I would be interested in seeing any thought experiments that people do not analyze in this way.
There isn't much in the direction of actually defining such causal networks. Back when the trolljecture looked plausible we tried to interpret it in terms of causal networks, which didn't quite work (see http://epsilonofdoom.blogspot.com/2015/08/provability-counterfactuals-vs-three.html). I expect something like Patrick's revived trolljecture to be even harder to view in this way, since it's inherently approximate; a notion of approximate causal network would be more appropriate here, if one could be found. I don't know of any proposal that (1) actually defines a causal network and (2) works on some easy examples.
Well, that's not entirely true. You can just pick any ordering that puts the agent's decision at the beginning, and use the resulting fully-connected DAG. This works wherever proof-based UDT works, but fails on the trolljecture counterexample, exactly because it assumes anything that provably follows from your agent's action is causally downstream of it.
Tsvi BT: "One candidate for the CDT of logical counterfactuals would just be putting a causal DAG structure on logical statements"
How does this interact with logical updatelessness? What happens to your DAG as you learn new facts?
Sam Eisenstat: I'm not sure if it's obvious, but the idea is that each node has a full table of counterfactuals. An instance of this has more data than just an ordering.
It's not intended to be updateless; it's a CDT/TDT-style theory rather than UDT-style. You could layer strategy selection on top of it (though it could be computationally difficult if your DAG is hard to work with).
When you learn new facts you condition the probability distribution associated with the DAG on them, like for any Bayesian network.
Tsvi BT: So then this is dynamically inconsistent in the counterfactual mugging with a logical coin.
Sam Eisenstat: Oh yeah, "layer strategy selection on top" doesn't work for logical uncertainty. Anyway, this was in response to Daniel's request for a theory that "works in more intuitive cases"; I hope I didn't give the impression that this would be hard to break.
On the other hand, you could try to "just do strategy selection" with respect to a particular prior if you're willing to treat everything you learn as an observation. This is easy if you pick a coherent prior, but it's uncomputable and it doesn't let you pay counterfactual muggers with logical coins. Abram's GLS stuff tries to do this with priors that don't already know all logical facts. What do you think of that sort of approach?
Tsvi BT: I don't have much more concrete to say. I currently view decision theory as being the questions of how to do logical updatelessness, and how to do strategy selection with a bounded reasoner. The GLS prior might be close, but there are basic open questions about how it works. "Strategic uncertainty" (or something) may or may not just boil down to logical uncertainty; the reason it might not is that you might actually have to define an optimization target other than "what Ideal Me thinks is the best strategy", to take into account that actual versions of you are bounded in different ways. These two "components" (I wildly speculate) could combine to give a reflectively stable decision theory.
Daniel Satanove: Proof lengths probably aren't the only way of doing the time step thing. Take the proof length definition of observation at time k, except delay all proofs of statements of the form a + b = c by one time step. This, or something sort of like this, will probably also work. Also any strictly increasing function on observation time should work.
The observation at time k should be an epistemic thing that describes how an agent learns of new theorems, rather than some fundamental property of math.
Jack Gallagher: What do you mean by "an epistemic thing"?
Daniel Satanove: "In order to break the cycles, we want some notion of logical time from which influence can only propagate forward. Proof length seems, at least to me, to be the most (only?) natural way to pull something like that off. A proof of length k can be thought of as a logical observation at time k."
You are restricting yourself to agents who do a breadth first search through proofs rather than anything that can do a more directed approach. Saying that you want some notion of "logical time" kind of sounds like you want something that is a property of the math rather than a property of the agent.
[EDIT: This all deals with measures, not semimeasures; see below.]
For \(\mu\) to dominate \(\nu\) in the old sense means that its Bayes score can only be some constant worse than that of \(\nu\), regardless of environment. My definition here implies that, but I think it's a stricter requirement.
Your definition is equivalent to the standard definition. Li and Vitányi say that \(\mu\) dominates \(\nu\) iff there is some constant \(c > 0\) such that for any binary string \(x\), we have \(\mu(x) \ge c \, \nu(x)\). Li and Vitányi's "probability distributions" take a binary string as input, while the probability distributions we are using ask for an element of a \(\sigma\)-algebra, but we can abuse notation by allowing a binary string to represent the event that the random sequence starts with this string as a prefix.
Theorem. For any probability distributions \(\mu\) and \(\nu\) on Cantor space and any \(c > 0\), the condition \(\mu(A) \ge c \, \nu(A)\) holds for all binary string events \(A\) iff it holds for all Borel events \(A\).
Proof. The \((\Leftarrow)\) direction is immediate. For \((\Rightarrow)\), suppose that \(\mu(A) \ge c \, \nu(A)\) for all binary string events \(A\). We first show that this property holds for every event in the algebra \(\mathcal{A}\) generated by the binary string events. We can write any \(A \in \mathcal{A}\) as a finite disjoint union of binary string events \(A_1, \dots, A_n\). Then, \(\mu(A) = \sum_{i=1}^{n} \mu(A_i) \ge c \sum_{i=1}^{n} \nu(A_i) = c \, \nu(A)\), as desired.
We can extend this to the \(\sigma\)-algebra generated by \(\mathcal{A}\), which is just the Borel \(\sigma\)-algebra on Cantor space, by the monotone class theorem. I'm not sure how much measure theory you know, but you can think of this as a form of induction on measurable sets: if a property holds on an algebra of sets \(\mathcal{A}\), and it is preserved by countable monotone unions and intersections, then it holds on every set in the \(\sigma\)-algebra generated by \(\mathcal{A}\). For countable monotone unions \(A_1 \subseteq A_2 \subseteq \cdots\), if \(\mu(A_i) \ge c \, \nu(A_i)\) for each \(i\), then \(\mu\bigl(\bigcup_i A_i\bigr) = \lim_i \mu(A_i) \ge c \lim_i \nu(A_i) = c \, \nu\bigl(\bigcup_i A_i\bigr)\). We can do the same thing for countable monotone intersections \(A_1 \supseteq A_2 \supseteq \cdots\). \(\square\)
EDIT: I'm talking about probability distributions rather than semimeasures. I don't know how your definition of dominance is helpful for semimeasures though. The universal semimeasure \(M\) is defined on binary strings, and I think that the question of whether something dominates \(M\) depends on how you extend \(M\) to the whole \(\sigma\)-algebra. I expect that many reasonable extensions of \(M\) would satisfy your Theorem 1, but this post (in particular the proof of Theorem 1) doesn't seem to choose a specific such extension.
Haskell doesn't actually let you do this as far as I can tell, but the natural computational way to implement a function \(\bot \to A\) is with a case expression with no cases. This is sensible because \(\bot\) has no constructors, so it should be possible to pattern match it with such an expression. Another way of thinking of this is that we can use pattern matching on sum types, and \(\bot\) is just the \(0\)-ary sum type.
Condition 4 in your theorem coincides with Lewis' account of counterfactuals. Pearl cites Lewis, but he also criticizes him on the grounds that the ordering on worlds is too arbitrary. In the language of this post, he is saying that condition 2 arises naturally from the structure of the problem and that condition 4 derives from the deeper structure corresponding to condition 2.
I also noticed that the function and the partial order can be read as "time of first divergence from the real world" and "first diverges before", respectively. This makes the theorem a lot more intuitive.
We can also construct an example where with a short proof and also proves , but any such proof is much longer. We only need to put a bound on the proof length in A's proof search. Then, the argument that proves its own consistency still works, and is rather short: as the proof length bound increases. However, there cannot be a proof of within 's proof length bound, since if it found one it would immediately take action 1. In this case can still prove that simply by running the agent, but this argument shows that any such proof must be long.
The argument that I had in mind was that if PA \(\vdash A() \ne 1\), then \(A() = 1\) by the chicken rule, so PA \(\vdash \square(A() \ne 1) \rightarrow A() = 1\), since PA knows how the chicken rule works. This gives us PA \(\vdash \square(A() \ne 1) \rightarrow \square\bot\), so PA can prove that if PA \(\vdash A() \ne 1\), then PA is inconsistent. I'll include this argument in my post, since you're right that this was too big a jump.
Edit: We also need to use this argument to show that the modal UDT agent gets to the part where it iterates over utilities, rather than taking an action at the chicken rule step. I didn't mention this explicitly, since I felt like I had seen it before often enough, but now I realize it is nontrivial enough to point out.
I (just yesterday) found a counterexample to this. The universe is a 5-and-10 variant that uses the unprovability of consistency:
def U():
    if A() == 2:
        if PA is consistent:
            return 10
        else:
            return 0
    else:
        return 5
The agent can be taken to be modal UDT, using PA as its theory. (The example will still work for other theories extending PA; we just need the universe's theory to include the agent's. Also, to simplify some later arguments, we suppose that the agent uses the chicken rule, and that it checks action 1 first, then action 2.) Since the agent cannot prove the consistency of its theory, it will not be able to prove \(A() = 2 \rightarrow U() = 10\), so the first implication which it can prove is \(A() = 1 \rightarrow U() = 5\). Thus, it will end up taking action 1.
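The search order matters in what follows, so here is a sketch of it in code. This is only an illustration: `provable` stands in for proof search in the agent's theory, and the `facts` set below encodes by hand which implications PA can prove in this example (not a real provability check).

```python
# Sketch of the modal-UDT action search described above: iterate over
# utility values in decreasing order, checking action 1 before action 2.
# `provable(a, u)` stands in for "the theory proves A()=a -> U()=u".

def udt_action(provable, actions=(1, 2), utilities=(10, 5, 0)):
    for u in utilities:
        for a in actions:
            if provable(a, u):
                return a
    return actions[0]  # fallback default action

# In the universe above, PA proves neither "A()=1 -> U()=10" nor
# "A()=2 -> U()=10" (the latter would require Con(PA)), but it does
# prove "A()=1 -> U()=5", so the search returns action 1.
facts = {(1, 5)}
print(udt_action(lambda a, u: (a, u) in facts))  # -> 1
```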
Now, we work in PA and try to show \(A() = 2 \rightarrow \neg\mathrm{Con}(\mathrm{PA})\). If PA is inconsistent (we have to account for this case since we are working in PA), then this follows straightforwardly. Next, we consider the case that PA is consistent and work through the agent's decision. PA can't prove \(A() \ne 1\), since we used the chicken rule, so since the sentence \(A() = 1 \rightarrow U() = 5\) is easily provable, the sentence \(A() = 1 \rightarrow U() = 10\) (i.e. the first sentence that the agent checks for proofs of) must be unprovable.
The next sentence we check is \(A() = 2 \rightarrow U() = 10\). If the agent finds a proof of this, then it takes action 2. Otherwise, it moves on to the sentence \(A() = 1 \rightarrow U() = 5\), which is easily provable as mentioned above, and it takes action 1. Hence, the agent takes action 2 iff it can prove \(A() = 2 \rightarrow U() = 10\), so Löb's theorem tells us that so, by the uniqueness of fixed points, it follows that . Then, we get , so by the definition of the universe, and so by Gödel's second incompleteness theorem. Thus, if the agent takes action 2, then PA is inconsistent, so \(A() = 2 \rightarrow \neg\mathrm{Con}(\mathrm{PA})\) as desired.
This tells us that . Also, by the chicken rule, so . Since PA does not prove at all, the shortest proof of is much shorter than the shortest proof of for any definition of "much shorter". (One can object here that there is no shortest proof, but (a) it seems natural to define the "length of the shortest proof" to be infinite if there is no proof, and (b) it is probably straightforward but tedious to modify the agent and universe so that there is a proof of , but it is very long.)
However, it is clear that \(U() = 0\) is not a legitimate counterfactual consequence of \(A() = 2\). Informally, if the agent chose action 2, it would have received utility 10, since PA is consistent. Thus, we have a counterexample.
One issue we discussed during the workshop is whether counterfactuals should be defined with respect to a state of knowledge. We may want to say here that we, who know a lot, are in a state of knowledge with respect to which \(A() = 2\) would counterfactually result in \(U() = 10\), but that someone who reasons in PA is in a state of knowledge w.r.t. which it would result in \(U() = 0\). One way to think about this is that we know that PA is obviously consistent, irrespective of how the agent acts, whereas PA does not know that it is consistent, allowing an agent using PA to think of itself as counterfactually controlling PA's consistency. Indeed, this is roughly how the argument above proceeds.
I'm not sure that this is a good way of thinking about it though. The agent goes through some weird steps, most notably a rather opaque application of the fixed point theorem, so I don't have a good feel for why it is reasoning this way. I want to unwrap that argument before I can say whether it's doing something that, on an intuitive level, constitutes legitimate counterfactual reasoning.
More worryingly, the perspective of counterfactuals as being defined w.r.t. states of knowledge seems to be at odds with PA believing a wrong counterfactual here. It would make sense for PA not to have enough information to make any statement about the counterfactual consequences of \(A() = 2\), but that's not what's happening if we think of PA's counterfactuals as obeying this conjecture; instead, PA postulates a causal mechanism by which the agent controls the consistency of PA, which we didn't expect to be there at all. Maybe it would all make sense if I had a deeper understanding of the proof I gave, but right now it is very odd.
(This is rather long; perhaps it should be a post? Would anyone prefer that I clean up a few things and make this a post? I'll also expand on the issue I mention at the end when I have more time to think about it.)