Posts

UDT1.01: Logical Inductors and Implicit Beliefs (5/10) 2024-04-18T08:39:13.368Z
UDT1.01 Essential Miscellanea (4/10) 2024-04-14T02:23:38.755Z
UDT1.01: Plannable and Unplanned Observations (3/10) 2024-04-12T05:24:34.435Z
UDT1.01: Local Affineness and Influence Measures (2/10) 2024-03-31T07:35:52.831Z
UDT1.01: The Story So Far (1/10) 2024-03-27T23:22:35.170Z
Infrafunctions Proofs 2023-04-27T19:25:44.903Z
Infrafunctions and Robust Optimization 2023-04-27T19:25:11.662Z
Threat-Resistant Bargaining Megapost: Introducing the ROSE Value 2022-09-28T01:20:11.605Z
Infra-Exercises, Part 1 2022-09-01T05:06:59.373Z
Less Threat-Dependent Bargaining Solutions?? (3/2) 2022-08-20T02:19:11.405Z
Unifying Bargaining Notions (2/2) 2022-07-27T03:40:30.524Z
Unifying Bargaining Notions (1/2) 2022-07-25T00:28:27.572Z
Infra-Topology 2022-04-22T02:10:40.030Z
Infra-Miscellanea 2022-04-22T02:09:07.306Z
[Closed] Job Offering: Help Communicate Infrabayesianism 2022-03-23T18:35:16.790Z
Summary of the Acausal Attack Issue for AIXI 2021-12-13T08:16:26.376Z
Countably Factored Spaces 2021-09-09T04:24:58.090Z
Confusions re: Higher-Level Game Theory 2021-07-02T03:15:11.105Z
The Many Faces of Infra-Beliefs 2021-04-06T10:43:53.227Z
Inframeasures and Domain Theory 2021-03-28T09:19:00.366Z
Infra-Domain proofs 1 2021-03-28T09:16:42.546Z
Infra-Domain Proofs 2 2021-03-28T09:15:15.284Z
Dark Matters 2021-03-14T23:36:58.884Z
Less Basic Inframeasure Theory 2020-12-16T03:52:47.970Z
Basic Inframeasure Theory 2020-08-27T08:02:06.109Z
Belief Functions And Decision Theory 2020-08-27T08:00:51.665Z
Proofs Section 1.1 (Initial results to LF-duality) 2020-08-27T07:59:12.512Z
Proofs Section 1.2 (Mixtures, Updates, Pushforwards) 2020-08-27T07:57:27.622Z
Proofs Section 2.1 (Theorem 1, Lemmas) 2020-08-27T07:54:59.744Z
Proofs Section 2.2 (Isomorphism to Expectations) 2020-08-27T07:52:08.121Z
Proofs Section 2.3 (Updates, Decision Theory) 2020-08-27T07:49:05.047Z
Introduction To The Infra-Bayesianism Sequence 2020-08-26T20:31:30.114Z
Counterfactual Induction (Lemma 4) 2019-12-17T05:05:15.959Z
Counterfactual Induction (Algorithm Sketch, Fixpoint proof) 2019-12-17T05:04:25.054Z
Counterfactual Induction 2019-12-17T05:03:32.401Z
CO2 Stripper Postmortem Thoughts 2019-11-30T21:20:33.685Z
A Brief Intro to Domain Theory 2019-11-21T03:24:13.416Z
So You Want to Colonize The Universe Part 5: The Actual Design 2019-02-27T10:23:28.424Z
So You Want to Colonize The Universe Part 4: Velocity Changes and Energy 2019-02-27T10:22:46.371Z
So You Want To Colonize The Universe Part 3: Dust 2019-02-27T10:20:14.780Z
So You Want to Colonize the Universe Part 2: Deep Time Engineering 2019-02-27T10:18:18.209Z
So You Want to Colonize The Universe 2019-02-27T10:17:50.427Z
Failures of UDT-AIXI, Part 1: Improper Randomizing 2019-01-06T03:53:03.563Z
COEDT Equilibria in Games 2018-12-06T18:00:08.442Z
Oracle Induction Proofs 2018-11-28T08:12:38.306Z
Bounded Oracle Induction 2018-11-28T08:11:28.183Z
What are Universal Inductors, Again? 2018-11-07T22:32:57.364Z
When EDT=CDT, ADT Does Well 2018-10-25T05:03:40.366Z
Asymptotic Decision Theory (Improved Writeup) 2018-09-27T05:17:03.222Z
Reflective AIXI and Anthropics 2018-09-24T02:15:18.108Z

Comments

Comment by Diffractor on UDT1.01: The Story So Far (1/10) · 2024-03-28T00:48:13.477Z · LW · GW

That original post lays out UDT1.0; I don't see anything about precomputing the optimal policy within it. The UDT1.1 fix of optimizing the global policy, instead of figuring out the best thing to do on the fly, was first presented here; note that the 1.1 post I linked came chronologically after the post you linked.

Comment by Diffractor on Unifying Bargaining Notions (2/2) · 2023-03-26T06:12:17.980Z · LW · GW

I'd strongly agree with that, mainly because, while points with this property exist, they are not necessarily unique. The non-uniqueness is a big issue.

Comment by Diffractor on Unifying Bargaining Notions (2/2) · 2023-03-26T06:09:14.466Z · LW · GW

It was a typo! And it has been fixed.

Comment by Diffractor on Unifying Bargaining Notions (1/2) · 2023-03-26T06:04:48.796Z · LW · GW

It is because beach/beach is the surplus-maximizing result. Any Pareto-optimal bargaining solution where money is involved will involve the surplus-maximizing result being played, and a side payment occurring.

Comment by Diffractor on Open Problem in Voting Theory · 2022-11-06T02:38:52.000Z · LW · GW

I have a reduction of this problem to a (hopefully) simpler problem. First up, establish the notation used.

[n] refers to the set {1,…,n}, where n is the number of candidates. Use Δ as an abbreviation for the space Δ([n]); it's the space of probability distributions over the candidates. View Δ as embedded in ℝⁿ, and set the origin at the center of Δ.

At this point, we can note that we can biject the following: 
1: Functions of type [n] → [0,1] 
2: Affine functions of type Δ → [0,1] 
3: Functions of the form x ↦ ⟨v,x⟩ + c, where v ∈ ℝⁿ and c ∈ ℝ, and everything's suitably set so that these functions are bounded in [0,1] over Δ. (basically, we extend our affine function to the entire space with the Hahn-Banach theorem, and use that every affine function can be written as a linear function plus a constant) We can reexpress our distribution D over utility functions as a distribution over these normal vectors v.

Now, we can reexpress the conjecture as follows. Is it the case that there exists a Λ ∈ Δ(Δ) s.t. for all Λ' ∈ Δ(Δ), we have

E_{λ∼Λ, λ'∼Λ', v∼D}[sgn(⟨v, λ−λ'⟩)] ≥ 0

Where sgn is the function that's -1 if the quantity is negative, 0 if 0, and 1 if the quantity is positive. To see the equivalence to the original formulation, we can rewrite things as

E_{λ∼Λ, λ'∼Λ', v∼D}[𝟙(⟨v,λ⟩ > ⟨v,λ'⟩) − 𝟙(⟨v,λ'⟩ > ⟨v,λ⟩)] ≥ 0

Where the bold 1 is an indicator function. And split up the expectation and realize that this is a probability, so we get

P_{λ∼Λ, λ'∼Λ', v∼D}(⟨v,λ⟩ > ⟨v,λ'⟩) − P_{λ∼Λ, λ'∼Λ', v∼D}(⟨v,λ'⟩ > ⟨v,λ⟩) ≥ 0

And this then rephrases as

P_{λ∼Λ, λ'∼Λ', v∼D}(⟨v,λ⟩ > ⟨v,λ'⟩) ≥ P_{λ∼Λ, λ'∼Λ', v∼D}(⟨v,λ'⟩ > ⟨v,λ⟩)

Which was the original formulation of the problem.

Abbreviating the function (λ,λ') ↦ E_{v∼D}[sgn(⟨v,λ−λ'⟩)] as f, then a necessary condition to have a Λ that dominates everything is that

sup_Λ inf_{Λ'} E_{λ∼Λ, λ'∼Λ'}[f(λ,λ')] ≥ 0

If you have this property, then you might not necessarily have an optimal Λ that dominates everything, but there are Λ that get a worst-case expectation arbitrarily close to 0. Namely, even if the worst possible Λ' is selected, then the violation of the defining domination inequality happens with arbitrarily small magnitude. There might not be an optimal lottery-lottery, but there are lottery-lotteries arbitrarily close to optimal where this closeness-to-optimality is uniform over every foe. Which seems good enough to me. So I'll be focused on proving this slightly easier statement and glossing over the subtle distinction between that, and the existence of truly optimal lottery-lotteries.

As it turns out, this slightly easier statement (that sup inf is 0 or higher) can be outright proven assuming the following conjecture.

Stably-Good-Response Conjecture: For every Λ' ∈ Δ(Δ) and ε > 0, there exists a Λ ∈ Δ(Δ) and a δ > 0 s.t. for all Λ'' with d(Λ', Λ'') < δ,

E_{λ∼Λ, λ''∼Λ''}[f(λ,λ'')] > −ε

Pretty much, for any desired level of suckage ε and any foe Λ', there's a probability distribution Λ you can pick which isn't just a good response (this always exists, just pick Λ' itself), but a stably good response, in the sense that there's some nonzero level of perturbation to the foe where Λ remains a good response no matter how the foe is perturbed.

Theorem 1: Assuming the Stably-Good-Response Conjecture, sup_Λ inf_{Λ'} E_{λ∼Λ, λ'∼Λ'}[f(λ,λ')] ≥ 0.

I'll derive a contradiction from the assumption that sup_Λ inf_{Λ'} E_{λ∼Λ, λ'∼Λ'}[f(λ,λ')] < 0. Accordingly, assume the strict inequality.

In such a case, there is some ε > 0 s.t. sup_Λ inf_{Λ'} E_{λ∼Λ, λ'∼Λ'}[f(λ,λ')] < −ε. Let the set B_Λ := {Λ' | E_{λ∼Λ, λ'∼Λ'}[f(λ,λ')] > −ε}. Now, every Λ' lies in the interior of B_Λ for some Λ, by the Stably-Good-Response Conjecture. Since Δ(Δ) is a compact set, we can isolate a finite subcover and get some finite set A of probability distributions Λ s.t. ⋃_{Λ∈A} int(B_Λ) ⊇ Δ(Δ).

Now, let the set C_{Λ'} := {Λ | E_{λ∼Λ, λ'∼Λ'}[f(λ,λ')] < −ε}. Since sup_Λ inf_{Λ'} E_{λ∼Λ, λ'∼Λ'}[f(λ,λ')] < −ε, this family of sets manages to cover all of Δ(A) (convex hull of our finite set.) Further, for any fixed Λ', Λ ↦ E_{λ∼Λ, λ'∼Λ'}[f(λ,λ')] is a continuous function Δ(A) → ℝ (a bit nonobvious, but true nonetheless because there's only finitely many vertices to worry about). Due to continuity, all the sets C_{Λ'} will be open. Since we have an open cover of Δ(A), which is a finite simplex (and thus compact), we can isolate a finite subcover, to get a finite set A' of Λ' s.t. ⋃_{Λ'∈A'} C_{Λ'} ⊇ Δ(A). And now we can go

−ε > sup_{Λ∈Δ(A)} min_{Λ'∈A'} E_{λ∼Λ, λ'∼Λ'}[f(λ,λ')] ≥ sup_{Λ∈Δ(A)} inf_{Λ'∈Δ(A')} E_{λ∼Λ, λ'∼Λ'}[f(λ,λ')] = inf_{Λ'∈Δ(A')} sup_{Λ∈Δ(A)} E_{λ∼Λ, λ'∼Λ'}[f(λ,λ')] ≥ inf_{Λ'∈Δ(A')} max_{Λ∈A} E_{λ∼Λ, λ'∼Λ'}[f(λ,λ')] > −ε

The first strict inequality was from how all Λ ∈ Δ(A) had some Λ' ∈ A' which made Λ get a bad score. The first ≥ was from expanding the set of options. The = was from how E_{λ∼Λ, λ'∼Λ'}[f(λ,λ')] is a continuous linear function when restricted to Δ(A) × Δ(A'), both of which are compact convex sets, so the minimax theorem can be applied. Then the next ≥ was from restricting the set of options, and the final > was from how every Λ' had some Λ ∈ A that'd make Λ' get a good score, by construction of A (and compactness to make the inequality a strict one).

But wait, we just showed −ε > −ε, that's a contradiction. Therefore, our original assumption must have been wrong. Said original assumption was that sup_Λ inf_{Λ'} E_{λ∼Λ, λ'∼Λ'}[f(λ,λ')] < 0, so negating it, we've proved that

sup_Λ inf_{Λ'} E_{λ∼Λ, λ'∼Λ'}[f(λ,λ')] ≥ 0

As desired.
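
As a sanity check of this reformulation (not part of the proof), here's a small numeric sketch in Python; the candidate count, voter distribution, and the two example lottery-lotteries are all made up for illustration:

```python
# Rough numeric sketch of the dominance functional f from this comment.
# Candidates, voters, and the example lottery-lotteries are all made up.
import numpy as np

rng = np.random.default_rng(0)
n = 3                                  # number of candidates
V = rng.uniform(0, 1, size=(500, n))   # 500 voters' normal vectors v

def f(lam, lam2):
    # f(λ, λ') = E_{v∼D}[sgn(⟨v, λ−λ'⟩)], with D the empirical voter distribution
    return np.mean(np.sign(V @ (lam - lam2)))

# A lottery-lottery here is a finitely-supported distribution over lotteries:
# a list of (probability, lottery) pairs.
L1 = [(0.5, np.array([1.0, 0.0, 0.0])), (0.5, np.array([0.0, 0.5, 0.5]))]
L2 = [(1.0, np.array([1/3, 1/3, 1/3]))]

def dominance(L, M):
    # E_{λ∼Λ, λ'∼Λ'}[f(λ, λ')]; Λ dominates Λ' when this is >= 0
    return sum(p * q * f(lam, mu) for p, lam in L for q, mu in M)

print(dominance(L1, L2), dominance(L2, L1))  # negatives of each other, since sgn is odd
```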

Comment by Diffractor on Threat-Resistant Bargaining Megapost: Introducing the ROSE Value · 2022-10-01T16:28:28.958Z · LW · GW

With this.

Comment by Diffractor on Threat-Resistant Bargaining Megapost: Introducing the ROSE Value · 2022-10-01T02:24:27.922Z · LW · GW

What I mean is, the players hit each other up, are like "yo, let's decide on which ROSE point in the object-level game we're heading towards"
Of course, they don't manage to settle on what equilibrium to go for in the resulting bargaining game, because, again, multiple ROSE points might show up in the bargaining game. 
But, the ROSE points in the bargaining game are in a restricted enough zone (what with that whole "must be better than the random-dictator point" thing) to seriously constrain the possibilities in the object-level game. "Worst ROSE point for Alice in the object-level-game" is a whole lot worse for her than "Worst ROSE point for Alice in the bargaining game about what to do in the object-level-game".

So, the players should be able to ratchet up their disagreement point and go "even if the next round of bargaining fails, at least we can promise that everyone does this well, right? Sure, everyone's going for their own idea of fairness, but even if Alice ends up with her worst ROSE point in the bargaining game, her utility is going to be at least this high, and similar for everyone else."

And so, each step of bargaining that successfully happens ratchets up the disagreement point closer to the Pareto frontier, in a way that should quickly converge. If someone disagrees on step 3, then the step-3 disagreement point gets played, which isn't that short of the Pareto frontier. And if someone doesn't have time for all this bargaining, they can just break things off at step 10 or something, that's just about as good as going all the way to infinity.

Or at least, it should work like this. I haven't proved that it does, and it depends on things like "what does ROSE bargaining look like for n players" and "does the random-dictator-point-dominance thing still hold in the n-player case" and "what's the analogue of that strategy where you block your foe from getting more than X utility, when there are multiple foes?". But this disagreement-point ratcheting is a strategy that addresses your worries with "ever-smaller pieces of the problem live on higher meta-levels, so the process of adding layers of meta actually converges to solving the problem, and breaking it off early solves most of the problem"

Regarding your last comment, yes, you could always just have a foe that's a jerk, but you can at least act so they don't gain from being a jerk, in a way robust against you and a foe having slightly different definitions of "jerk".

Comment by Diffractor on Threat-Resistant Bargaining Megapost: Introducing the ROSE Value · 2022-09-30T03:50:25.357Z · LW · GW

I think I have a contender for something which evades the conditional-threat issue stated at the end, as well as obvious variants and strengthenings of it, and which would be threat-resistant in a dramatically stronger sense than ROSE.

There's still a lot of things to check about it that I haven't done yet. And I'm unsure how to generalize to the n-player case. And it still feels unpleasantly hacky, according to my mathematical taste.

But the task at least feels possible, now.

EDIT: it turns out it was still susceptible to the conditional-threat issue, but then I thought for a while and came up with a different contender that feels a lot less hacky, and that provably evades the conditional-threat issue. Still lots of work left to be done on it, though.

Comment by Diffractor on Threat-Resistant Bargaining Megapost: Introducing the ROSE Value · 2022-09-28T21:08:55.854Z · LW · GW

For 1, it's just intrinsically mathematically appealing (continuity is always really nice when you can get it), and also because of an intuition that if your foe experiences a tiny preference perturbation, you should be able to use small conditional payments to replicate their original preferences/incentive structure and start negotiating with that, instead.

I should also note that nowhere in the visual proof of the ROSE value for the toy case, is continuity used. Continuity just happens to appear.

For 2, yes, it's part of game setup. The buttons are of whatever intensity you want (but they have to be intensity-capped somewhere for technical reasons regarding compactness). Looking at the setup, for each player pair i,j, c_{i,j} is the cap for how much of j's utility that i can destroy. These can vary, as long as they're nonnegative and not infinite. From this, it's clear "Alice has a powerful button, Bob has a weak one" is one of the possibilities, that would just mean c_{Alice,Bob} ≫ c_{Bob,Alice}. There isn't an assumption that everyone has an equally powerful button, because then you could argue that everyone just has an equal strength threat and then it wouldn't be much of a threat-resistance desideratum, now would it? Heck, you can even give one player a powerful button and the other a zero-strength button that has no effect, that fits in the formalism.

So the theorem is actually saying "for all members of the family of destruction games with the button caps set wherever the heck you want, the payoffs are the same as the original game".

Comment by Diffractor on Threat-Resistant Bargaining Megapost: Introducing the ROSE Value · 2022-09-28T06:49:13.713Z · LW · GW

My preferred way of resolving it is treating the process of "arguing over which equilibrium to move to" as a bargaining game, and just find a ROSE point from that bargaining game. If there's multiple ROSE points, well, fire up another round of bargaining. This repeated process should very rapidly have the disagreement points close in on the Pareto frontier, until everyone is just arguing over very tiny slices of utility.

This is imperfectly specified, though, because I'm not entirely sure what the disagreement points would be, because I'm not sure how the "don't let foes get more than what you think is fair" strategy generalizes to >2 players. Maaaybe disagreement-point-invariance comes in clutch here? If everyone agrees that an outcome as bad or worse than their least-preferred ROSE point would happen if they disagreed, then disagreement-point-invariance should come in to have everyone agree that it doesn't really matter exactly where that disagreement point is.

Or maybe there's some nice principled property that some equilibria have, which others don't, that lets us winnow down the field of equilibria somewhat. Maybe that could happen.

I'm still pretty unsure, but "iterate the bargaining process to argue over which equilibria to go to, you don't get an infinite regress because you rapidly home in on the Pareto frontier with each extra round you add" is my best bad idea for it.

EDIT: John Harsanyi had the same idea. He apparently had some example where there were multiple CoCo equilibria and his suggestion was that a second round of bargaining could be initiated over which equilibrium to pick, but that in general, it'd be so hard to compute the n-person Pareto frontier for large n, that an equilibrium might be stable because nobody can find a different equilibrium nearby to aim for.

So this problem isn't unique to ROSE points in full generality (CoCo equilibria have the exact same issue), it's just that ROSE is the only one that produces multiple solutions for bargaining games, while CoCo only returns a single solution for bargaining games. (bargaining games are a subset of games in general)

Comment by Diffractor on Unifying Bargaining Notions (1/2) · 2022-09-17T21:30:32.525Z · LW · GW

So, if you are limited to only pure strategies, for some reason, then yes, Chinese would be on the Pareto frontier.
But if you can implement randomization, then Chinese is not on the Pareto frontier, because both sides agree that "flip a coin, Heads for Sushi, Tails for Italian" is just strictly better than Chinese.

The convex shape consists of all the payoff pairs you can get if you allow randomization.
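
A minimal sketch with made-up payoffs (the actual numbers in the post may differ):

```python
# Hypothetical (Alice, Bob) payoffs -- not the post's exact numbers.
payoffs = {"Sushi": (3.0, 1.0), "Italian": (1.0, 3.0), "Chinese": (1.5, 1.5)}

# Randomizing: flip a fair coin between Sushi and Italian.
coin = tuple(0.5 * s + 0.5 * i for s, i in zip(payoffs["Sushi"], payoffs["Italian"]))
print(coin)                                                  # (2.0, 2.0)
print(all(c > p for c, p in zip(coin, payoffs["Chinese"])))  # True: strictly better for both
```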

Comment by Diffractor on What's the longest a sentient observer could survive in the Dark Era? · 2022-09-16T05:09:05.028Z · LW · GW

Alright, this is kind of a Special Interest, so here's your relevant thought dump.

First up, the image is kind of misleading, in the sense that you can always tack on extra orders of magnitude. You could tack on another thousand orders of magnitude and make it look even longer, or just go "this is 900 OOM's of literally nothing happening, let's clip that off and focus on the interesting part"

Assuming proton decay is a thing (that free protons decay with a ridiculously long half-life)....

ok, I was planning on going "as a ludicrous upper bound, here's the number", but, uh, the completely ludicrous upper bound wound up being a WHOLE LOT longer than I thought. I... I didn't even think it was possible to stall till the evaporation of even a small black hole. But this calculation indicates that if you're aiming solely at living ludicrously long, you can stall about a googol years, enough for even the largest black holes to evaporate, and to get to the end of the black hole era. I'm gonna need to rethink some stuff.

EDIT: rethought some stuff, realized it doesn't change my conclusions from when I last looked into this. The fundamental problem is that, for any remotely realistic numbers, if you're trying to catch the final evaporation of a black hole to harvest its mass-energy, you'll blow a lot more than the amount of mass-energy that you could gain, in order to wait that long.

Final conclusion: If proton decay is a thing, it's definitely not worth waiting till the end of a black hole, you'll want to have things wrapped up far earlier. If proton decay isn't a thing, you'll want to wait till the black hole evaporates to catch that final party and the last scraps of mass-energy. If proton decay is a thing and you're willing to blow completely ridiculous cosmic amounts of resources on it, you can last till the late parts of the black hole era.

The rough rationale is as follows. Start with 10x the mass of the largest black holes in the universe, around 10^11 solar masses stockpiled. If they're spinning fast enough, you can extract energy from them, assume you can extract all of it (it's over 10 percent, so let's round it up to 100 percent). Assume that proton decay is ~10^40 years (a high estimate), and that we use the energy at 100 percent efficiency to make matter (also high estimate), you can take out one proton, wait for around a proton decay time, take out the next proton, and so on. Then you can take out around 10^68 protons, and each one lasts you around 10^40 years, getting around 10^108 years (high uncertainty). And, coincidentally, natural Hawking radiation finishes off the black hole of that size in ~10^100 years, leaving a small margin left over for silly considerations like "maybe the intelligence needs more than one proton to physically implement".

So, not remotely practical, but maybe something like 10^79 years would actually be doable? That extra 29 OOM's of wiggle room patches over a lot of sins in this calculation.
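
For what it's worth, here's that order-of-magnitude arithmetic spelled out; every input is one of the rough assumptions above, not a measured value:

```python
# Order-of-magnitude check of the stalling estimate above.
# Every number here is an assumption from this comment, not a measurement.
stockpile_kg   = 1e11 * 2e30   # ~10x the largest black holes, in kg (assumed)
proton_mass_kg = 1.67e-27
proton_decay_y = 1e40          # assumed high-estimate proton lifetime, in years

n_protons   = stockpile_kg / proton_mass_kg   # ~1e68 protons' worth of mass
total_years = n_protons * proton_decay_y      # one proton "burned" per decay time
print(f"{total_years:.1e} years")             # ~1e108 years
```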

But, in terms of what would actually be practical for the far future of humanity, it'd be the strat of "dump as much mass into a fast-spinning black hole as possible. Like, eat the entire Laniakea supercluster complex. Wait a trillion years for the cosmic microwave background radiation to cool to its floor temperature. You'd be in the late Stelliferous era at this point, with a few red dwarfs around, if you didn't dump all the stars in the mega-black-hole already. Set up some infrastructure around the mega-hole, and use the Blandford-Znajek mechanism to convert the mega-hole spin into electrical power. You should be able to get about a gigawatt of power for the next ~10^45 years to run a whole lotta computation and a little bit of maintenance, and if proton decay is messing with things, chop however many OOM's you need off the time and add those OOM's to the power output. Party for a trillion trillion trillion eons with your super-optimized low-temperature computing infrastructure"

Comment by Diffractor on Unifying Bargaining Notions (1/2) · 2022-08-20T03:21:27.899Z · LW · GW

Yeah, "transferrable utility games" are those where there is a resource, and the utilities of all players are linear in that resource (in order to redenominate everyone's utilities as being denominated in that resource modulo a shift factor). I believe the post mentioned this.

Comment by Diffractor on Unifying Bargaining Notions (2/2) · 2022-07-27T20:13:24.898Z · LW · GW

Task completed.

Comment by Diffractor on Unifying Bargaining Notions (1/2) · 2022-07-27T20:07:34.892Z · LW · GW

Agreed. The bargaining solution for the entire game can be very different from adding up the bargaining solutions for the subgames. If there's a subgame where Alice cares very much about victory in that subgame (interior decorating choices) and Bob doesn't care much, and another subgame where Bob cares very much about it (food choice) and Alice doesn't care much, then the bargaining solution of the entire relationship game will end up being something like "Alice and Bob get some relative weights on how important their preferences are, and in all the subgames, the weighted sum of their utilities is maximized. Thus, Alice will be given Alice-favoring outcomes in the subgames where she cares the most about winning, and Bob will be given Bob-favoring outcomes in the subgames where he cares the most about winning"

And in particular, since it's a sequential game, Alice can notice if Bob isn't being fair, and enforce the bargaining solution by going "if you're not aiming for something sorta like this, I'll break off the relationship". So, from Bob's point of view, aiming for any outcome that's too Bob-favoring has really low utility since Alice will inevitably catch on. (this is the time-extended version of "give up on achieving any outcome that drives the opponent below their BATNA") Basically, in terms of raw utility, it's still a bargaining game deep down, but once both sides take into account how the other will react, the payoff matrix for the restaurant game (taking the future interactions into account) will look like "it's a really bad idea to aim for an outcome the other party would regard as unfair"

Comment by Diffractor on Unifying Bargaining Notions (1/2) · 2022-07-27T19:42:14.727Z · LW · GW

Actually, they apply anyways in all circumstances, not just after the rescaling and shifting is done! Scale-and-shift invariance means that no matter how you stretch and shift the two axes, the bargaining solution always hits the same probability-distribution over outcomes,  so monotonicity means "if you increase the payoff numbers you assign for some or all of the outcomes, the Pareto frontier point you hit will give you an increased number for your utility score over what it'd be otherwise" (no matter how you scale-and-shift). And independence of irrelevant alternatives says "you can remove any option that you have 0 probability of taking and you'll still get the same probability-distribution over outcomes as you would in the original game" (no matter how you scale-and-shift)
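
Here's a sketch of the scale-invariance part in code, using the Nash bargaining solution as a concrete stand-in for a solution satisfying these axioms, a crude grid search instead of a proper optimizer, and made-up payoffs. The disagreement point is held at the origin, so only pure rescalings are shown (a shift would move the disagreement point along with it):

```python
# Sketch: a scale-invariant bargaining solution (Nash, here) picks the same
# probability distribution over outcomes after rescaling either player's axis.
import itertools
import numpy as np

outcomes = np.array([[4.0, 1.0], [1.0, 4.0], [2.0, 2.0]])  # made-up (Alice, Bob) payoffs

def nash_solution(outs):
    # Grid-search the simplex of distributions over the three outcomes,
    # maximizing the Nash product relative to the disagreement point (0, 0).
    best, best_w = -1.0, None
    grid = np.linspace(0, 1, 101)
    for w1, w2 in itertools.product(grid, grid):
        if w1 + w2 > 1:
            continue
        w = np.array([w1, w2, 1 - w1 - w2])   # a distribution over outcomes
        u = w @ outs                          # the resulting expected utilities
        if u[0] * u[1] > best:
            best, best_w = u[0] * u[1], w
    return best_w

w = nash_solution(outcomes)
w_rescaled = nash_solution(outcomes * np.array([10.0, 0.5]))  # stretch both axes
print(w, np.allclose(w, w_rescaled))  # same distribution over outcomes: True
```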

Comment by Diffractor on Introduction To The Infra-Bayesianism Sequence · 2022-06-30T03:24:51.340Z · LW · GW

If you're looking for curriculum materials, I believe that the most useful reference would probably be my "Infra-exercises", a sequence of posts containing all the math exercises you need to reinvent a good chunk of the theory yourself. Basically, it's the textbook's exercise section, and working through interesting math problems and proofs on one's own has a much better learning feedback loop and retention of material than slogging through the old posts. The exercises are short on motivation and philosophy compared to the posts, though, much like how a functional analysis textbook takes for granted that you want to learn functional analysis and doesn't bother motivating it.

The primary problem is that the exercises aren't particularly calibrated in terms of difficulty, and in order for me to get useful feedback, someone has to actually work through all of them, so feedback has been a bit sparse. So I'm stuck in a situation where I keep having to link everyone to the infra-exercises over and over and it'd be really good to just get them out and publicly available, but if they're as important as I think, then the best move is something like "release them one at a time and have a bunch of people work through them as a group" like the fixpoint exercises, instead of "just dump them all as public documents".

I'll ask around about speeding up the publication of the exercises and see what can be done there.

I'd strongly endorse linking this introduction even if the exercises are linked as well, because this introduction serves as the table of contents to all the other applicable posts.

Comment by Diffractor on Basic Inframeasure Theory · 2022-06-19T02:44:28.443Z · LW · GW

So, if you make Nirvana infinite utility, yes, the fairness criterion becomes "if you're mispredicted, you have any probability at all of entering the situation where you're mispredicted" instead of "have a significant probability of entering the situation where you're mispredicted", so a lot more decision-theory problems can be captured if you take Nirvana as infinite utility. But, I talk in another post in this sequence (I think it was "the many faces of infra-beliefs") about why you want to do Nirvana as 1 utility instead of infinite utility.

Parfit's Hitchhiker with a perfect predictor is a perfectly fine acausal decision problem, we can still represent it, it just cannot be represented as an infra-POMDP/causal decision problem.

Yes, the fairness criterion is tightly linked to the pseudocausality condition. Basically, the acausal->pseudocausal translation is the part where the accuracy of the translation might break down, and once you've got something in pseudocausal form, translating it to causal form from there by adding in Nirvana won't change the utilities much.

Comment by Diffractor on Basic Inframeasure Theory · 2022-06-18T04:59:04.053Z · LW · GW

So, the flaw in your reasoning is after updating we're in the city, the "die in the desert" possibility doesn't go "logically impossible, infinite utility". We just go "alright, off-history measure gets converted to 0 utility", a perfectly standard update. So it updates to (0,0) (ie, there's 0 probability I'm in this situation in the first place, and my expected utility for not getting into this situation in the first place is 0, because of probably dying in the desert)

As for the proper way to do this analysis, it's a bit finicky. There's something called "acausal form", which is the fully general way of representing decision-theory problems. Basically, you just give an infrakernel K that tells you your uncertainty over which history will result, for each of your policies.

So, you'd have

K(pay) = (0.99·δ(alive, paid) + 0.01·δ(die in desert), 0)
K(don't pay) = (0.99·δ(die in desert) + 0.01·δ(alive, rich), 0)

Ie, if you pay, 99 percent chance of ending up alive but paying and 1 percent chance of dying in the desert, if you don't pay, 99 percent chance of dying in the desert and 1 percent chance of cheating them, no extra utility juice on either one.

You update on the event "I'm alive". The off-event utility function is like "being dead would suck, 0 utility". So, your infrakernel updates to (leaving off the scale-and-shift factors, which doesn't affect anything)

K(pay) = (0.99·δ(alive, paid), 0)
K(don't pay) = (0.01·δ(alive, rich), 0)

Because, the probability mass on "die in desert" got burned and turned into utility juice, 0 of it since it's the worst thing. Let's say your utility function assigns 0.5 utility to being alive and rich, and 0.4 utility to being alive and poor. So the utility of the first policy is 0.99·0.4 = 0.396, and the utility of the second policy is 0.01·0.5 = 0.005, so it returns the same answer of paying up. It's basically thinking "if I don't pay, I'm probably not in this situation in the first place, and the utility of "I'm not in this situation in the first place" is also about as low as possible."
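
In code form, that update computation is just this (a minimal sketch with the same numbers):

```python
# Worst-case expected utility after updating on "I'm alive".
# Utilities from this comment: alive-and-poor 0.4, alive-and-rich 0.5,
# and the off-history (dead-in-desert) measure contributes 0 utility.
def eu_pay():     # the 99% alive-but-poor branch survives the update
    return 0.99 * 0.4 + 0.01 * 0.0

def eu_no_pay():  # only the 1% "cheated them" branch survives the update
    return 0.01 * 0.5 + 0.99 * 0.0

print(eu_pay(), eu_no_pay())  # 0.396 > 0.005, so the agent pays up
```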

BUT

There's a very mathematically natural way to translate any decision theory to "causal form", and as it turns out, the process which falls directly out of the math is that thing where you go "hard-code in all possible policies, go to Nirvana if I behave differently from the hard-coded policy". This has an advantage and a disadvantage. The advantage is that now your decision-theory problem is in the form of an infra-POMDP, a much more restrictive form, so you've got a much better shot at actually developing a practical algorithm for it. The disadvantage is that not all decision-theory problems survive the translation process unchanged. Speaking informally, the "fairness criterion" to translate a decision-theory problem into causal form without too much loss in fidelity is something like "if I was mispredicted, would I actually have a good shot at entering the situation where I was mispredicted to prove the prediction wrong".

Counterfactual mugging fits this. If Omega flubs its prediction, you've got a 50 percent chance of being able to prove it wrong.
XOR blackmail fits this. If the blackmailer flubs its prediction and thinks you'll pay up, you've got like a 90 percent chance of being able to prove it wrong.
Newcomb's problem fits this. If Omega flubs its prediction and thinks you'll 2-box, you'll definitely be able to prove it wrong.

Transparent Newcomb and Parfit's Hitchhiker don't fit this "fairness property" (especially for 100 percent accuracy), and so when you translate them to a causal problem, it ruins things. If the predictor screws up and thinks you'll 2-box on seeing a filled transparent box/won't pay up on seeing you got saved, then the transparent box is empty/you die in the desert, and you don't have a significant shot at proving them wrong.

Let's see what's going wrong. Our two a-environments are

e₁ = (0.99·δ(alive; Nirvana if you don't pay, poor if you do) + 0.01·δ(die in desert), 0)
e₂ = (0.99·δ(die in desert) + 0.01·δ(alive; Nirvana if you pay, rich if you don't), 0)

Update on the event "I didn't die in the desert". Then, neglecting scale-and-shift, our two a-environments are

e₁ = (0.99·δ(alive; Nirvana if you don't pay, poor if you do), 0)
e₂ = (0.01·δ(alive; Nirvana if you pay, rich if you don't), 0)

Letting N be the utility of Nirvana,
If you pay up, then the expected utilities of these are 0.99·0.4 = 0.396 and 0.01·N
If you don't pay up, then the expected utilities of these are 0.99·N and 0.01·0.5 = 0.005

Now, if N is something big like 100, then the worst-case utilities of the policies are 0.396 vs 0.005, as expected, and you pay up. But if N is something like 1, then the worst-case utilities of the policies are 0.01 vs 0.005, which... well, it technically gets the right answer, but those numbers are suspiciously close to each other, the agent isn't thinking properly. And so, without too much extra effort tweaking the problem setup, it's possible to generate decision-theory problems where the agent just straight-up makes the wrong decision after changing things to the causal setting.
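
The N-dependence is easy to see in a tiny sketch (same numbers as above):

```python
# Worst-case (min over the two updated a-environments) utility of each policy,
# as a function of the Nirvana utility N.
def worst_case(pay, N):
    if pay:
        e1 = 0.99 * 0.4   # predicted-pay environment: alive and poor
        e2 = 0.01 * N     # predicted-don't-pay environment: misprediction, Nirvana
    else:
        e1 = 0.99 * N     # predicted-pay environment: misprediction, Nirvana
        e2 = 0.01 * 0.5   # predicted-don't-pay environment: alive and rich
    return min(e1, e2)

for N in (100, 1):
    print(N, worst_case(True, N), worst_case(False, N))
# N=100: 0.396 vs 0.005, a comfortable gap. N=1: 0.01 vs 0.005, suspiciously close.
```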
 

Comment by Diffractor on Introduction To The Infra-Bayesianism Sequence · 2022-02-02T08:08:23.474Z · LW · GW

Omega and hypercomputational powers isn't needed, just decent enough prediction about what someone would do. I've seen Transparent Newcomb being run on someone before, at a math camp. They were predicted to not take the small extra payoff, and they didn't. And there was also an instance of acausal vote trading that I managed to pull off a few years ago, and I've put someone in a counterfactual mugging sort of scenario where I did pay out due to predicting they'd take the small loss in a nearby possible world. 2/3 of those instances were cases where I was specifically picking people that seemed unusually likely to take this sort of thing seriously, and it was predictable what they'd do.

I guess you figure out the entity is telling the truth in roughly the same way you'd figure out a human is telling the truth? Like "they did this a lot against other humans and their prediction record is accurate".

And no, I don't think that you'd be able to get from this mathematical framework to proving "a proof of benevolence is impossible". What the heck would that proof even look like?

Comment by Diffractor on Passing Troll Bridge · 2022-01-29T09:22:41.295Z · LW · GW

The key piece that makes any Lobian proof tick is the "proof of X implies X" part. For Troll Bridge, X is "crossing implies bridge explodes".

For standard logical inductors, that Lobian implication holds because, if a proof of X showed up, every trader betting in favor of X would get free money, so there could be a trader that just names a really really big bet in favor of X (it's proved, after all), the agent ends up believing X, and so doesn't cross, and so crossing implies bridge explodes.

For this particular variant of a logical inductor, there's an upper limit on the number of bets a trader is able to make, and this can possibly render the statement "if a proof of X showed up, the agent would believe X" false. And so, the key piece of the Lobian proof fails, and the agent happily crosses the bridge with no issue, because it would disbelieve a proof of bridge explosion if it saw it (and so the proof does not show up in the first place).

Comment by Diffractor on Introduction To The Infra-Bayesianism Sequence · 2021-12-20T19:20:23.176Z · LW · GW

Said actions or lack thereof cause a fairly low utility differential compared to the actions in other, non-doomy hypotheses. Also I want to draw a critical distinction between "full knightian uncertainty over meteor presence or absence", where your analysis is correct, and "ordinary probabilistic uncertainty between a high-knightian-uncertainty hypothesis, and a low-knightian-uncertainty one that says the meteor almost certainly won't happen" (where the meteor hypothesis will be ignored unless there's a meteor-inspired modification to what you do that's also very cheap in the "ordinary uncertainty" world, like calling your parents, because the meteor hypothesis is suppressed in decision-making by the low expected utility differentials, and we're maximin-ing expected utility)

Comment by Diffractor on Introduction To The Infra-Bayesianism Sequence · 2021-12-20T02:37:15.195Z · LW · GW

Something analogous to what you are suggesting occurs. Specifically, let's say you assign 95% probability to the bandit game behaving as normal, and 5% to "oh no, anything could happen, including the meteor". As it turns out, this behaves similarly to the ordinary bandit game being guaranteed, as the "maybe meteor" hypothesis assigns all your possible actions a score of "you're dead" so it drops out of consideration.

The important aspect which a hypothesis needs, in order for you to ignore it, is that no matter what you do you get the same outcome, whether it be good or bad. A "meteor of bliss hits the earth and everything is awesome forever" hypothesis would also drop out of consideration because it doesn't really matter what you do in that scenario.

To be a wee bit more mathy, probabilistic mix of inframeasures works like this. We've got a probability distribution ζ over hypotheses, and a bunch of hypotheses ψ_i, things that take functions as input, and return expectation values. So, your prior, your probabilistic mixture of hypotheses according to your probability distribution, would be the function

f ↦ Σ_i ζ(i)·ψ_i(f)

It gets very slightly more complicated when you're dealing with environments, instead of static probability distributions, but it's basically the same thing. And so, if you vary your actions/vary your choice of function f, and one of the hypotheses is assigning all these functions/choices of actions the same expectation value, then it can be ignored completely when you're trying to figure out the best function/choice of actions to plug in.

So, hypotheses that are like "you're doomed no matter what you do" drop out of consideration, an infra-Bayes agent will always focus on the remaining hypotheses that say that what it does matters.
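
A toy version of that, with made-up hypotheses and actions:

```python
# Each hypothesis maps a choice of action to a worst-case expectation value.
# The "doomed" hypothesis returns the same value no matter the action, so the
# argmax of the prior mixture is decided entirely by the other hypothesis.
prior = {"normal": 0.95, "doomed": 0.05}
hypotheses = {
    "normal": {"a": 0.6, "b": 0.9, "c": 0.1},
    "doomed": {"a": 0.0, "b": 0.0, "c": 0.0},   # "you're dead" whatever you do
}

def mixture(action):
    return sum(p * hypotheses[h][action] for h, p in prior.items())

print(max(["a", "b", "c"], key=mixture))  # "b", exactly as if "doomed" weren't there
```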

Comment by Diffractor on Introduction To The Infra-Bayesianism Sequence · 2021-12-19T03:49:11.939Z · LW · GW

Well, taking worst-case uncertainty is what infradistributions do. Did you have anything in mind that can be done with Knightian uncertainty besides taking the worst-case (or best-case)?

And if you were dealing with best-case uncertainty instead, then the corresponding analogue would be assuming that you go to hell if you're mispredicted (and then, since best-case things happen to you, the predictor must accurately predict you).

Comment by Diffractor on Introduction To The Infra-Bayesianism Sequence · 2021-12-18T08:00:02.076Z · LW · GW

This post is still endorsed, it still feels like a continually fruitful line of research. A notable aspect of it is that, as time goes on, I keep finding more connections and crisper ways of viewing things which means that for many of the further linked posts about inframeasure theory, I think I could explain them from scratch better than the existing work does. One striking example is that the "Nirvana trick" stated in this intro (to encode nonstandard decision-theory problems), has transitioned from "weird hack that happens to work" to "pops straight out when you make all the math as elegant as possible". Accordingly, I'm working on a "living textbook" (like a textbook, but continually being updated with whatever cool new things we find) where I try to explain everything from scratch in the crispest way possible, to quickly catch up on the frontier of what we're working on. That's my current project.

I still do think that this is a large and tractable vein of research to work on, and the conclusion hasn't changed much.

Comment by Diffractor on Solve Corrigibility Week · 2021-11-29T03:10:31.035Z · LW · GW

Availability: Almost all times between 10 AM and PM, California time, regardless of day. Highly flexible hours. Text over voice is preferred, I'm easiest to reach on Discord. The LW Walled Garden can also be nice.

Comment by Diffractor on Countably Factored Spaces · 2021-11-08T00:50:01.025Z · LW · GW

Amendment made.

Comment by Diffractor on Troll Bridge · 2021-10-14T06:19:58.270Z · LW · GW

A note to clarify for confused readers of the proof. We started out by assuming □(cross → explode), and cross. We conclude ¬cross by how the agent works. But the step from there to □⊥ (ie, inconsistency of PA) isn't entirely spelled out in this post.

Pretty much, that follows from a proof by contradiction. Assume con(PA), ie ¬□⊥, and it happens to be a con(PA) theorem that the agent can't prove in advance what it will do, ie, ¬□(cross) ∧ ¬□(¬cross). (I can spell this out in more detail if anyone wants) However, combining □(cross → explode) and the fact that the agent's reasoning is formalizable within PA gets you □(¬cross) (or the other option, □(cross)), which, along with ¬□(¬cross), gets you ⊥. So PA isn't consistent, ie, □⊥.

Comment by Diffractor on Finite Factored Sets: Polynomials and Probability · 2021-09-05T19:42:32.477Z · LW · GW

In the proof of Lemma 3, it should be 

"Finally, since , we have that .

Thus,  and  are both equal to .

instead.

Comment by Diffractor on Yet More Modal Combat · 2021-08-24T19:04:31.359Z · LW · GW

Any idea of how well this would generalize to stuff like Chicken or games with more than 2-players, 2-moves?

Comment by Diffractor on Treatments for depression that depressed LW readers may not have tried? · 2021-08-22T03:38:57.477Z · LW · GW

I was subclinically depressed, acquired some bupropion from Canada, and it's been extremely worthwhile.

Comment by Diffractor on Introduction To The Infra-Bayesianism Sequence · 2021-05-12T23:08:25.013Z · LW · GW

I don't know, we're hunting for it, relaxations of dynamic consistency would be extremely interesting if found, and I'll let you know if we turn up with anything nifty.

Comment by Diffractor on The Many Faces of Infra-Beliefs · 2021-04-28T05:23:28.263Z · LW · GW

Looks good. 

Re: the dispute over normal bayesianism: For me, "environment" denotes "thingy that can freely interact with any policy in order to produce a probability distribution over histories". This is a different type signature than a probability distribution over histories, which doesn't have a degree of freedom corresponding to which policy you pick.

But for infra-bayes, we can associate a classical environment with the set of probability distributions over histories (for various possible choices of policy), and then the two distinct notions become the same sort of thing (set of probability distributions over histories, some of which can be made to be inconsistent by how you act), so you can compare them.

Comment by Diffractor on The Many Faces of Infra-Beliefs · 2021-04-27T07:38:16.770Z · LW · GW

I'd say this is mostly accurate, but I'd amend number 3. There's still a sort of non-causal influence going on in pseudocausal problems, you can easily formalize counterfactual mugging and XOR blackmail as pseudocausal problems (you need acausal specifically for transparent newcomb, not vanilla newcomb). But it's specifically a sort of influence that's like "reality will adjust itself so contradictions don't happen, and there may be correlations between what happened in the past, or other branches, and what your action is now, so you can exploit this by acting to make bad outcomes inconsistent". It's purely action-based, in a way that manages to capture some but not all weird decision-theoretic scenarios.

In normal bayesianism, you do not have a pseudocausal-causal equivalence. Every ordinary environment is straight-up causal.

Comment by Diffractor on "Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party · 2021-04-09T20:39:38.185Z · LW · GW

Re point 1, 2: Check this out. For the specific case of 0 to even bits, ??? to odd bits, I think Solomonoff can probably get that, but not more general relations.

Re: point 3, Solomonoff is about stochastic environments that just take your action as an input, and aren't reading your policy. For infra-Bayes, you can deal with policy-dependent environments without issue, as you can consider hard-coding in every possible policy to get a family of stochastic environments, and UDT behavior naturally falls out as a result from this encoding. There's still some open work to be done on which sorts of policy-dependent environments like this are learnable (inferrable from observations), but it's pretty straightforward to cram all sorts of weird decision-theory scenarios in as infra-Bayes hypothesis, and do the right thing in them.

Comment by Diffractor on "Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party · 2021-04-09T19:33:24.063Z · LW · GW

Ah. So, low expected utility alone isn't too much of a problem. The amount of weight a hypothesis has in a prior after updating depends on the gap between the best-case values and worst-case values. Ie, "how much does it matter what happens here". So, the stuff that withers in the prior as you update are the hypotheses that are like "what happens now has negligible impact on improving the worst-case". So, hypotheses that are like "you are screwed no matter what" just drop out completely, as if it doesn't matter what you do, you might as well pick actions that optimize the other hypotheses that aren't quite so despondent about the world.

In particular, if all the probability distributions in a set are like "this thing that just happened was improbable", the hypothesis takes a big hit in the posterior, as all the a-measures are like "ok, we're in a low-measure situation now, what happens after this point has negligible impact on utility". 

I still need to better understand how updating affects hypotheses which are a big set of probability distributions so there's always one probability distribution that's like "I correctly called it!".

The motivations for different g are: 

If g is your actual utility function, then updating with g as your off-event utility function grants you dynamic consistency. Past-you never regrets turning over the reins to future you, and you act just as UDT would.

If g is the constant-1 function, then that corresponds to updates where you don't care at all what happens off-history (the closest thing to normal updates), and both the "diagonalize against knowing your own action" behavior in decision theory and the Nirvana trick pops out for free from using this update.

Comment by Diffractor on "Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party · 2021-04-09T19:24:50.920Z · LW · GW

"mixture of infradistributions" is just an infradistribution, much like how a mixture of probability distributions is a probability distribution.

Let's say we've got a prior ζ, a probability distribution over indexed hypotheses.

If you're working in a vector space, you can take any countable collection of sets in said vector space, and mix them together according to a prior ζ giving a weight to each set. Just make the set of all points which can be made by the process "pick a point from each set, and mix the points together according to the probability distribution ζ"

For infradistributions as sets of probability distributions or a-measures or whatever, that's a subset of a vector space. So you have a bunch of sets Ψ_i, and you just mix the sets together according to ζ, that gives you your set Ψ.

If you want to think about the mixture in the concave functional view, it's even nicer. You have a bunch of ψ_i which are "hypothesis i can take a function and output what its worst-case expectation value is". The mixture of these, ψ, is simply defined as ψ(f) := Σ_i ζ(i)·ψ_i(f). This is just mixing the functions together!

Both of these ways of thinking of mixtures of infradistributions are equivalent, and recover mixture of probability distributions as a special case.
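
Here's a finite toy version of that equivalence, with small invented sets of probability vectors standing in for the (properly, closed convex) sets of an infradistribution:

```python
# Mixing two toy infradistributions over a 2-point space, both ways.
# Each "infradistribution" is represented by a finite set of probability vectors.
import itertools
import numpy as np

psi1 = [np.array([0.2, 0.8]), np.array([0.6, 0.4])]   # hypothesis 1's set
psi2 = [np.array([0.5, 0.5])]                          # hypothesis 2's set
zeta = [0.3, 0.7]                                      # prior over the two hypotheses

f = np.array([1.0, 0.0])   # some function on the 2-point space

# Set view: pick a point from each set, mix according to zeta, then take the inf.
set_view = min((zeta[0] * p + zeta[1] * q) @ f
               for p, q in itertools.product(psi1, psi2))

# Functional view: mix the worst-case expectation functionals according to zeta.
functional_view = (zeta[0] * min(p @ f for p in psi1)
                   + zeta[1] * min(q @ f for q in psi2))

print(set_view, functional_view, np.isclose(set_view, functional_view))  # equal
```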

Comment by Diffractor on "Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party · 2021-04-09T19:06:16.952Z · LW · GW

The concave functional view is "the thing you do with a probability distribution μ is take expectations of functions with it. In fact, it's actually possible to identify a probability distribution with the function f ↦ E_{x∼μ}[f(x)] mapping a function to its expectation. Similarly, the thing we do with an infradistribution is taking expectations of functions with it. Let's just look at the behavior of the function ψ we get, and neglect the view of everything as a set of a-measures."

As it turns out, this view makes proofs a whole lot cleaner and tidier, and you only need a few conditions on a function like that for it to have a corresponding set of a-measures.

Comment by Diffractor on Stuart_Armstrong's Shortform · 2021-04-04T01:04:26.818Z · LW · GW

Sounds like a special case of crisp infradistributions (ie, all partial probability distributions have a unique associated crisp infradistribution)

Given some partial probability distribution μ, we can consider the (nonempty) set of probability distributions equal to μ where μ is defined. This set is convex (clearly, a mixture of two probability distributions which agree with μ about the probability of an event will also agree with μ about the probability of an event).

Convex (compact) sets of probability distributions = crisp infradistributions.

Comment by Diffractor on Introduction To The Infra-Bayesianism Sequence · 2021-03-31T23:10:40.408Z · LW · GW

You're completely right that hypotheses with unconstrained Murphy get ignored because you're doomed no matter what you do, so you might as well optimize for just the other hypotheses where what you do matters. Your "-1,000,000 vs -999,999 is the same sort of problem as 0 vs 1" reasoning is good.

Again, you are making the serious mistake of trying to think about Murphy verbally, rather than thinking of Murphy as the personification of the "inf" part of the ψ(f) := inf_{μ∈Ψ} E_μ(f) definition of expected value, and writing actual equations. Ψ is the available set of possibilities for a hypothesis. If you really want to, you can think of this as constraints on Murphy, and Murphy picking from available options, but it's highly encouraged to just work with the math.

For mixing hypotheses (several different Ψ_i sets of possibilities) according to a prior distribution ζ, you can write it as an expectation functional via f ↦ Σ_i ζ(i)·ψ_i(f) (mix the expectation functionals of the component hypotheses according to your prior on hypotheses), or as a set via {Σ_i ζ(i)·μ_i : μ_i ∈ Ψ_i} (the available possibilities for the mix of hypotheses are all of the form "pick a possibility from each hypothesis, mix them together according to your prior on hypotheses")

This is what I meant by "a constraint on Murphy is picked according to this probability distribution/prior, then Murphy chooses from the available options of the hypothesis they picked", that {Σ_i ζ(i)·μ_i : μ_i ∈ Ψ_i} set (your mixture of hypotheses according to a prior) corresponds to selecting one of the Ψ_i sets according to your prior ζ, and then Murphy picking freely from the set Ψ_i.


Using ψ(f) := inf_{μ∈Ψ} E_μ(f) (and considering our choice of what to do affecting the choice of f, we're trying to pick the best function f) we can see that if the prior is composed of a bunch of "do this sequence of actions or bad things happen" hypotheses, the details of what you do sensitively depend on the probability distribution over hypotheses. Just like with AIXI, really.
Informal proof: if ψ_i(f_i) = 1 and ψ_i(f_j) = 0 (assuming i ≠ j, where f_i is the function corresponding to doing hypothesis i's mandated action sequence), then we can see that

Σ_i ζ(i)·ψ_i(f_j) = ζ(j)

and so, the best sequence of actions to do would be the one associated with the "you're doomed if you don't do blahblah action sequence" hypothesis with the highest prior. Much like AIXI does.


Using the same sort of thing, we can also see that if there's a maximally adversarial hypothesis in there somewhere that's just like "you get 0 reward, screw you" no matter what you do (let's say this is ψ_0), then we have

Σ_i ζ(i)·ψ_i(f) = ζ(0)·ψ_0(f) + Σ_{i≠0} ζ(i)·ψ_i(f) = ζ(0)·0 + Σ_{i≠0} ζ(i)·ψ_i(f)

And so, that hypothesis drops out of the process of calculating the expected value, for all possible functions/actions. Just do a scale-and-shift, and you might as well be dealing with the prior ζ conditioned on i ≠ 0, which a-priori assumes you aren't in the "screw you, you lose" environment.


Hm, what about if you've just got two hypotheses, one where you're like "my knightian uncertainty scales with the amount of energy in the universe so if there's lots of energy available, things could be really bad, while if there's little energy available, Murphy can't make things bad" (Ψ_1) and one where reality behaves pretty much as you'd expect it to (Ψ_2). And your two possible options would be "burn energy freely so Murphy can't use it" (the choice f_1, attaining a worst-case expected utility of a in Ψ_1 and b in Ψ_2), and "just try to make things good and don't worry about the environment being adversarial" (the choice f_2, attaining 0 utility in Ψ_1, 1 utility in Ψ_2).

The expected utility of f_1 (burn energy) would be ζ(1)·a + ζ(2)·b
And the expected utility of f_2 (act normally) would be ζ(1)·0 + ζ(2)·1 = ζ(2)

So "act normally" wins if ζ(2) > ζ(1)·a + ζ(2)·b, which can be rearranged as ζ(2)·(1−b) > ζ(1)·a. Ie, you'll act normally if the probability of "things are normal" times the loss from burning energy when things are normal exceeds the probability of "Murphy's malice scales with amount of available energy" times the gain from burning energy in that universe.
So, assuming you assign a high enough probability to "things are normal" in your prior, you'll just act normally. Or, making the simplifying assumption that "burn energy" has similar expected utilities in both cases (ie, a ≈ b), then it would come down to questions like "is the utility of burning energy closer to the worst-case where Murphy has free rein, or the best-case where I can freely optimize?"
And this is assuming there's just two options, the actual strategy selected would probably be something like "act normally, if it looks like things are going to shit, start burning energy so it can't be used to optimize against me"
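
Plugging made-up numbers into that decision rule, as a sketch:

```python
# Decision rule from above: act normally iff zeta2 * (1 - b) > zeta1 * a, where
# a, b are the worst-case utilities of burning energy in each hypothesis.
def act_normally(zeta1, zeta2, a, b):
    return zeta2 * (1 - b) > zeta1 * a

# Made-up numbers: burning energy salvages a = 0.3 in the adversarial hypothesis,
# and still achieves b = 0.8 when things are normal.
print(act_normally(zeta1=0.05, zeta2=0.95, a=0.3, b=0.8))  # True: just act normally
print(act_normally(zeta1=0.60, zeta2=0.40, a=0.3, b=0.8))  # False: burn energy
```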

Note that, in particular, the hypothesis where the level of attainable badness scales with available energy is very different from the "screw you, you lose" hypothesis, since there are actions you can take that do better and worse in the "level of attainable badness scales with energy in the universe" hypothesis, while the "screw you, you lose" hypothesis just makes you lose. And both of these are very different from a "you lose if you don't take this exact sequence of actions" hypothesis. 

Murphy is not a physical being, it's a personification of an equation, thinking verbally about an actual Murphy doesn't help because you start confusing very different hypotheses, think purely about what the actual set of probability distributions Ψ_i corresponding to hypothesis i looks like. I can't stress this enough.

Also, remember, the goal is to maximize worst-case expected value, not worst-case value.

 

Comment by Diffractor on Introduction To The Infra-Bayesianism Sequence · 2021-03-25T02:54:33.564Z · LW · GW

There's actually an upcoming post going into more detail on what the deal is with pseudocausal and acausal belief functions, among several other things, I can send you a draft if you want. "Belief Functions and Decision Theory" is a post that hasn't held up nearly as well to time as "Basic Inframeasure Theory".

Comment by Diffractor on Introduction To The Infra-Bayesianism Sequence · 2021-03-24T19:17:02.796Z · LW · GW

If you use the Anti-Nirvana trick, your agent just goes "nothing matters at all, the foe will mispredict and I'll get -infinity reward" and rolls over and cries since all policies are optimal. Don't do that one, it's a bad idea.

For the concave expectation functionals: Well, there's another constraint or two, like monotonicity, but yeah, LF duality basically says that you can turn any (monotone) concave expectation functional into an inframeasure. Ie, all risk aversion can be interpreted as having radical uncertainty over some aspects of how the environment works and assuming you get worst-case outcomes from the parts you can't predict.

For your concrete example, that's why you have multiple hypotheses that are learnable. Sure, one of your hypotheses might have complete knightian uncertainty over the odd bits, but another hypothesis might not. Betting on the odd bits is advised by a more-informative hypothesis, for sufficiently good bets. And the policy selected by the agent would probably be something like "bet on the odd bits occasionally, and if I keep losing those bets, stop betting", as this wins in the hypothesis where some of the odd bits are predictable, and doesn't lose too much in the hypothesis where the odd bits are completely unpredictable and out to make you lose.

Comment by Diffractor on Introduction To The Infra-Bayesianism Sequence · 2021-03-22T05:33:01.912Z · LW · GW

Maximin, actually. You're maximizing your worst-case result.

It's probably worth mentioning that "Murphy" isn't an actual foe where it makes sense to talk about destroying resources lest Murphy use them, it's just a personification of the fact that we have a set of options, any of which could be picked, and we want to get the highest lower bound on utility we can for that set of options, so we assume we're playing against an adversary with perfectly opposite utility function for intuition. For that last paragraph, translating it back out from the "Murphy" talk, it's "wouldn't it be good to use resources in order to guard against worst-case outcomes within the available set of possibilities?" and this is just ordinary risk aversion.

For that equation E_B(f) := inf_{μ∈B} E_μ(f), B can be any old set of probabilistic environments you want. You're not spending any resources or effort, a hypothesis just is a set of constraints/possibilities for what reality will do, a guess of the form "Murphy's operating under these constraints/must pick an option from this set."

You're completely right that for constraints like "environment must be a valid chess board", that's too loose of a constraint to produce interesting behavior, because Murphy is always capable of screwing you there.

This isn't too big of an issue in practice, because it's possible to mix together several infradistributions with a prior, which is like "a constraint on Murphy is picked according to this probability distribution/prior, then Murphy chooses from the available options of the hypothesis they picked". And as it turns out, you'll end up completely ignoring hypotheses where Murphy can screw you over no matter what you do. You'll choose your policy to do well in the hypotheses/scenarios where Murphy is more tightly constrained, and write the "you automatically lose" hypotheses off because it doesn't matter what you pick, you'll lose in those.

But there is a big unstudied problem of "what sorts of hypotheses are nicely behaved enough that you can converge to optimal behavior in them", that's on our agenda.

An example that might be an intuition pump: there's a very big difference between the hypothesis "Murphy picks a coin of unknown bias at the start, and I have to win by predicting the coinflips accurately" and the hypothesis "Murphy can bias each coinflip individually, and I have to win by predicting the coinflips accurately". The important difference seems to be that past performance is indicative of future behavior in the first hypothesis and not in the second. For the first hypothesis, betting according to Laplace's law of succession would do well in the long run no matter what weighted coin Murphy picks, because you'll catch on pretty fast. For the second hypothesis, no strategy can possibly help, because past performance isn't indicative of future behavior.
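A quick simulation sketch of the first hypothesis (nothing here is from the post; the bias and flip count are arbitrary):

```python
import random

# Against a single fixed coin of unknown bias, Laplace's law of succession,
# estimate = (heads + 1) / (flips + 2), homes in on the true bias, so a
# bettor relying on it catches on no matter which coin was picked at the start.
random.seed(0)
true_bias = 0.8  # the one weighted coin "Murphy" committed to up front
heads = flips = 0
for _ in range(10_000):
    heads += random.random() < true_bias
    flips += 1
print(f"Laplace estimate: {(heads + 1) / (flips + 2):.3f}, true bias: {true_bias}")
# In the second hypothesis, the bias of each individual flip is chosen
# adversarially, so no amount of past data constrains the next flip and
# no estimator can do anything analogous.
```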

Comment by Diffractor on Dark Matters · 2021-03-17T21:46:49.097Z · LW · GW

I found this Quanta magazine article about it, which seems to indicate that it fits the CMB spectrum well but required a fair amount of fiddling with gravity to do so; I lamentably lack the physics capabilities to evaluate the original paper.

Comment by Diffractor on Dark Matters · 2021-03-15T12:03:14.546Z · LW · GW

If there's something wrong with some theory, isn't it quite odd that looking around at different parts of the universe produces such a striking level of agreement on how much missing mass there is? If there was some out-of-left-field thing, I'd expect it to have confusing manifestations in many different areas, with astronomers angsting about dramatically inconsistent measurements; I would not expect the CMB to end up explained away (and the error bars on those measurements are really, really small) by the same 5:1 mix of non-baryonic matter vs baryonic matter that astronomers were postulating for everything else.

In other words, if you were starting out blind, the "something else will be found for a theory" bucket would not start out with most of its probability mass on "and in every respect, including the data that hasn't come in yet since it's the 1980s now, it's gonna look exactly like the invisible mass scenario". It's certainly not ruled out, but it has taken a bit of a beating.

Also, physics is not obligated to make things easy to find. Like how testing Grand Unified Theories would take a particle accelerator capable of reaching the GUT scale, which means one the size of a solar system.

Comment by Diffractor on Dark Matters · 2021-03-15T11:53:33.819Z · LW · GW

Yes, pink is gas and purple is mass, but also the gas there makes up the dominant component of the visible mass in the Bullet Cluster, far outweighing the stars.

Also, physicists have come up with a whole lot of possible candidates for dark matter particles. The supersymmetry-based ones took a decent kicking at the LHC, and I'm unsure of the motivations for some of the other ones, but the two that look most promising (to me, others may differ in opinion) are axions and sterile neutrinos, as those were conjectured to plug holes in the Standard Model, and so they've got a stronger physics motivation than the rest. But again, it might be something no physicist saw coming.

For axions, there's something in particle physics called the strong CP problem, where there's no theoretical reason whatsoever why strong-force interactions shouldn't break CP symmetry. And yet, as far as we can tell, the CP-symmetry-breakingness of the strong-force interaction is precisely zero. Axions were postulated as a way to deal with this, and for certain mass ranges, they would work. They'd be extremely light particles.

And for sterile neutrinos, there's a weird thing we've noticed where all the other quarks and leptons can have left-handed or right-handed chirality, but neutrinos only come in the left-handed form; nobody's ever found a right-handed neutrino. Also, in the vanilla Standard Model, neutrinos are supposed to be massless. And as it turns out, if you introduce some right-handed neutrinos and do a bit of physics fiddling, something called the seesaw mechanism shows up, which has the two effects of making ordinary neutrinos very light (and they are indeed thousands of times lighter than any other elementary particle with mass), and the right-handed neutrinos very heavy (so it's hard to make them at a particle accelerator). Also, since the weak interaction (the major way we know neutrinos are a thing) is sensitive to chirality, the right-handed neutrinos don't really do much of anything besides have gravity and have slight interactions with neutrinos, which are already hard to detect. So that's another possibility.
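For the curious, the standard back-of-the-envelope version of that seesaw (textbook type I seesaw, nothing specific to this thread) is just a 2x2 mass matrix:

```latex
% Type I seesaw sketch: Dirac mass m_D, big Majorana mass M >> m_D.
% Mass matrix in the (nu_L, N_R) basis:
M_\nu \;=\; \begin{pmatrix} 0 & m_D \\ m_D & M \end{pmatrix},
% with eigenvalues, for M >> m_D, of magnitude approximately
m_{\text{light}} \;\approx\; \frac{m_D^2}{M}, \qquad m_{\text{heavy}} \;\approx\; M.
% Cranking M up pushes one mass down and the other up, hence "seesaw":
% ordinary neutrinos end up very light, right-handed ones very heavy.
```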

Comment by Diffractor on Avoid Unnecessarily Political Examples · 2021-01-13T21:45:55.619Z · LW · GW

I'd go with number 2, because my snap reaction was "ooh, there's a "show personal blogposts" button?"

EDIT: Ok, I found the button. The problem with that button is that it looks identical to the other tags, and it's at the right side of the screen when the structure of "Latest" draws your eyes to the left side. I'd make it a bit bigger and put it on the left.

Comment by Diffractor on Belief Functions And Decision Theory · 2021-01-05T06:06:48.975Z · LW · GW

So, first off, I should probably say that a lot of the formalism overhead involved in this post in particular feels like the sort of thing that will get a whole lot more elegant as we work more things out. But "Basic Inframeasure Theory" still looks pretty good at this point and is worth reading, and the basic results (ability to translate from pseudocausal to causal, dynamic consistency, capturing most of UDT, definition of learning) will still hold up.

Yes, your current understanding is correct: it's rebuilding probability theory in more generality to be suitable for RL in nonrealizable environments, and capturing a much broader range of decision-theoretic problems, as well as whatever spin-off applications may come from having the basic theory worked out, like our infradistribution logic stuff.

It copes with unrealizability because its hypotheses are not probability distributions, but sets of probability distributions (actually more general than that, but it's a good mental starting point), corresponding to properties that reality may have, without fully specifying everything. In particular, if a class of belief functions (read: properties the environment may fulfill) is learned, this implies that for every property within that class that the true environment fulfills (you don't know the true environment exactly), the infrabayes agent will match or exceed the expected utility lower bound that could be guaranteed if you knew reality had that property (in the low-time-discount limit).
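In rough symbols (my own ad-hoc notation, not the post's; $\gamma$ is the time discount, $e^*$ the true environment, and $U_\gamma$ the discounted utility), the guarantee looks something like this:

```latex
% Rough shape of the learning guarantee, in made-up notation: for every
% learned hypothesis H that the true environment e* satisfies, the agent's
% actual expected utility asymptotically matches or exceeds the best
% lower bound obtainable by assuming H alone.
\forall H \ni e^* :\quad
\liminf_{\gamma \to 1}\Big(
  \mathbb{E}_{e^*}\big[U_\gamma(\pi^{\mathrm{agent}}_\gamma)\big]
  \;-\; \max_{\pi}\,\min_{e \in H}\, \mathbb{E}_{e}\big[U_\gamma(\pi)\big]
\Big) \;\ge\; 0
```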

There's another key consideration which Vanessa was telling me to put in which I'll post in another comment once I fully work it out again.

Also, thank you for noticing that it took a lot of work to write all this up, the proofs took a while. n_n

Comment by Diffractor on Less Basic Inframeasure Theory · 2020-12-26T08:58:29.955Z · LW · GW

So, we've also got an analogue of KL-divergence for crisp infradistributions. 

We'll be using $H_1$ and $H_2$ for crisp infradistributions, and $\mu_1$ and $\mu_2$ for probability distributions associated with them. $D_{KL}$ will be used for the KL-divergence of infradistributions, and $d_{KL}$ will be used for the KL-divergence of probability distributions. For crisp infradistributions, the KL-divergence is defined as

I'm not entirely sure why it's like this, but it has the basic properties you would expect of the KL-divergence, like concavity in both arguments and interacting well with continuous pushforwards and the semidirect product.

Straight off the bat, we have:

 

Proposition 1: $D_{KL}(H_1\Vert H_2) \ge 0$

Proof: KL-divergence between probability distributions is always nonnegative, by Gibbs' inequality.

 

Proposition 2: 

And now, because KL-divergence between probability distributions is 0 only when they're equal, we have:

 

Proposition 3: If $H_2$ is the uniform distribution on $X$, then $D_{KL}(H_1\Vert H_2) \le \log|X|$

And the cross-entropy of any distribution with the uniform distribution is always $\log|X|$, so the KL-divergence to the uniform distribution is $\log|X|$ minus an entropy term, which is at most $\log|X|$.

 

Proposition 4: $D_{KL}(H_1\Vert H_2)$ is a concave function over $(H_1, H_2)$.

Proof: Let's use $p$ as our number in $[0,1]$ in order to talk about mixtures. Then,

Then we apply concavity of the KL-divergence for probability distributions to get:

 

Proposition 5: 


At this point we can abbreviate the KL-divergence, and observe that we have a multiplication by 1, to get:

And then pack up the expectation

Then, with the choice of $\mu_1$ and $K_1$ fixed, we can move the choice of the $\mu_2$ and $K_2$ all the way inside, to get:

Now, there's something else we can notice. When choosing $K_2$, it doesn't matter what $\mu_2$ is selected: you want to take every $x$ and maximize the quantity inside the expectation, and that consideration selects your $K_2$. So, then we can get:

And pack up the KL-divergence to get:

And distribute the min to get:

And then, we can pull out that fixed quantity and get:

And pack up the KL-divergence to get:

 

Proposition 6: 

To do this, we'll go through the proof of Proposition 5 up to the first place where we have an inequality. The last step before the inequality was:

Now, for a direct product, it's like a semidirect product but all the $K_1(x)$ and $K_2(x)$ are the same infradistribution regardless of $x$, so we have:

Now, this is a constant, so we can pull it out of the expectation to get:

 

Proposition 7: For a continuous $g : X \to Y$, $D_{KL}(g_*(H_1)\Vert g_*(H_2)) \le D_{KL}(H_1\Vert H_2)$

For this, we'll need to use the Disintegration Theorem (the classical version for probability distributions), and adapt some results from Proposition 5. Let's show as much as we can before showing this.

Now, hypothetically, if we had

$d_{KL}(g_*(\mu_1)\Vert g_*(\mu_2)) \le d_{KL}(\mu_1\Vert\mu_2)$,

then we could use that result to get

$D_{KL}(g_*(H_1)\Vert g_*(H_2)) \le D_{KL}(H_1\Vert H_2)$,

and we'd be done. So, our task is to show

$d_{KL}(g_*(\mu_1)\Vert g_*(\mu_2)) \le d_{KL}(\mu_1\Vert\mu_2)$

for any pair of probability distributions $\mu_1$ and $\mu_2$. Now, here's what we'll do. The $\mu_1$ and $\mu_2$ give us probability distributions over $X$, and the $g_*(\mu_1)$ and $g_*(\mu_2)$ are probability distributions over $Y$. So, let's take the joint distribution over $X \times Y$ given by selecting a point from $X$ according to the relevant distribution and applying $g$. By the classical version of the disintegration theorem, we can write it either way: as starting with the marginal distribution over $X$ and taking a semidirect product to $Y$, or as starting with the marginal distribution over $Y$ and taking a semidirect product with some Markov kernel to $X$. So, we have:

$\mu_1 \ltimes g = g_*(\mu_1) \ltimes K_1 \quad\text{and}\quad \mu_2 \ltimes g = g_*(\mu_2) \ltimes K_2$

for some Markov kernels $K_1, K_2 : Y \to \Delta X$. Why? Well, the joint distribution over $X \times Y$ is given by $\mu_1 \ltimes g$ or $\mu_2 \ltimes g$ respectively (you have a starting distribution, and $g$ lets you take an input in $X$ and get an output in $Y$). But, breaking it down the other way, we start with the marginal distribution of those joint distributions on $Y$ (the pushforward w.r.t. $g$), and can write the joint distribution as a semidirect product going the other way. Basically, it's just two different ways of writing the same distributions, which is why the KL-divergence doesn't vary at all.

Now, it is also a fact that, for semidirect products (sorry, we're gonna let $\mu_1, \mu_2, K_1, K_2$ be arbitrary here and unconnected to the fixed ones we were looking at earlier, this is just a general property of semidirect products), we have:

$d_{KL}(\mu_1 \ltimes K_1\Vert \mu_2 \ltimes K_2) = d_{KL}(\mu_1\Vert\mu_2) + \mathbb{E}_{x \sim \mu_1}[d_{KL}(K_1(x)\Vert K_2(x))]$
To see this, run through the proof of Proposition 5, because probability distributions are special cases of infradistributions. Running up to right before the inequality, we had the same expression, but with a choice over the second-argument distributions still to be made. But when we're dealing with probability distributions, there's only one possible choice of probability distribution to select, so we just have the equality above.
Applying this, we have:

$d_{KL}(\mu_1\Vert\mu_2) + \mathbb{E}_{x \sim \mu_1}[d_{KL}(\delta_{g(x)}\Vert \delta_{g(x)})] = d_{KL}(\mu_1 \ltimes g\Vert \mu_2 \ltimes g) = d_{KL}(g_*(\mu_1) \ltimes K_1\Vert g_*(\mu_2) \ltimes K_2) = d_{KL}(g_*(\mu_1)\Vert g_*(\mu_2)) + \mathbb{E}_{y \sim g_*(\mu_1)}[d_{KL}(K_1(y)\Vert K_2(y))]$

The first equality is our expansion of semidirect product for probability distributions, the second equality is the two joint distributions being equal, and the third equality is, again, expansion of semidirect product for probability distributions. Contracting the two sides of this, we have:

$d_{KL}(\mu_1\Vert\mu_2) + \mathbb{E}_{x \sim \mu_1}[d_{KL}(\delta_{g(x)}\Vert \delta_{g(x)})] = d_{KL}(g_*(\mu_1)\Vert g_*(\mu_2)) + \mathbb{E}_{y \sim g_*(\mu_1)}[d_{KL}(K_1(y)\Vert K_2(y))]$

Now, the KL-divergence between a distribution and itself is 0, so the expectation on the left-hand side is 0, and we have

$d_{KL}(\mu_1\Vert\mu_2) = d_{KL}(g_*(\mu_1)\Vert g_*(\mu_2)) + \mathbb{E}_{y \sim g_*(\mu_1)}[d_{KL}(K_1(y)\Vert K_2(y))] \ge d_{KL}(g_*(\mu_1)\Vert g_*(\mu_2))$

And bam, we have $d_{KL}(g_*(\mu_1)\Vert g_*(\mu_2)) \le d_{KL}(\mu_1\Vert\mu_2)$, which is what we needed to carry the proof through.
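A tiny numeric check of the distribution-level inequality above (the distributions and the function $g$ are made up): coarse-graining by a pushforward can only shrink $d_{KL}$.

```python
from math import log

def d_kl(p, q):
    """KL-divergence between two finitely-supported distributions (as dicts)."""
    return sum(px * log(px / q[x]) for x, px in p.items() if px > 0)

def pushforward(mu, g):
    """Distribution of g(x) when x is drawn from mu."""
    out = {}
    for x, px in mu.items():
        out[g[x]] = out.get(g[x], 0.0) + px
    return out

# Made-up distributions on X = {0, 1, 2} and a map g : X -> Y merging 0 and 1.
mu1 = {0: 0.5, 1: 0.3, 2: 0.2}
mu2 = {0: 0.2, 1: 0.2, 2: 0.6}
g = {0: "a", 1: "a", 2: "b"}

coarse = d_kl(pushforward(mu1, g), pushforward(mu2, g))
fine = d_kl(mu1, mu2)
print(f"{coarse:.4f} <= {fine:.4f}: {coarse <= fine}")  # data processing holds
```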

Comment by Diffractor on CO2 Stripper Postmortem Thoughts · 2020-12-15T18:58:16.577Z · LW · GW

It is currently disassembled in my garage, and will be fully tested when the 2.0 version is built; construction of the 2.0 version has been stalled this year because I've been working on other projects. The 1.0 version did remove CO2 from a room as measured by a CO2 meter, but the size and volume made it not worthwhile.