post by [deleted] · GW

This is a link post for

Comments sorted by top scores.

comment by gjm · 2023-07-12T14:39:35.012Z · LW(p) · GW(p)

I am very much not an expert on this. But: I don't see why bet 1 "subjunctively dominates" bet 2.

Suppose I'm currently planning to take bet 2, and suppose PA is able to prove that. Then I am expecting to get +1 from the bet.

Now, suppose we consider switching the output of my algorithm from "bet 2" to "bet 1". Then, counterfactually, PA will no longer prove that I take bet 2, so I now expect to be taking bet 1 in the not-P case, for an outcome of -1.

This is not better than the +1 I am currently expecting to get by taking bet 2.

What am I missing? (My best guess is that you reckon the comparison I should be doing isn't -1 after switching versus +1 before switching, but -1 after switching versus -10 from still taking bet 2 after changing the output of my algorithm, but if so then I don't understand why that would be the right comparison to make.)
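To make the comparison concrete, here is a minimal sketch using only the three payoffs quoted in the comment above; the original post's full payoff table is not reproduced in this thread, so nothing else is assumed:

```python
# The three payoffs gjm quotes (the full table from the original post is not
# reproduced in this thread):
BET2_IF_P = +1       # take Bet 2, and PA proves I take Bet 2 (P true)
BET1_IF_NOT_P = -1   # take Bet 1, so PA does not prove I take Bet 2 (P false)
BET2_IF_NOT_P = -10  # take Bet 2 while P is false

# Comparison gjm finds natural: the outcome after switching to Bet 1
# (P becomes false) versus the outcome he currently expects from Bet 2.
comparison_a = (BET1_IF_NOT_P, BET2_IF_P)      # (-1, +1): switching looks worse

# Comparison gjm guesses the post intends: hold fixed the not-P world that
# switching would bring about, and compare the two bets inside that world.
comparison_b = (BET1_IF_NOT_P, BET2_IF_NOT_P)  # (-1, -10): switching looks better

print(comparison_a, comparison_b)
```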

Replies from: Sylvester Kollin
comment by Sylvester Kollin · 2023-07-13T15:03:18.041Z · LW(p) · GW(p)

To be sure, switching to Bet 1 is great evidence that not-P is true (that's the whole point), but that's not the sort of reasoning FDT recommends. Rather, the question is whether we take the Peano axioms to be downstream of the output of the algorithm in the relevant sense.

As the authors make clear, FDT is supposed to be "structurally similar" to CDT [1], and in the same way that CDT regards the history and the laws of nature as outside the agent's control in Ahmed's problems, FDT should arguably regard the Peano axioms as outside its control (i.e., "upstream" of the algorithm). What could be more upstream?

  1. ^

    Levinstein and Soares write (page 2): "FDT is structurally similar to CDT, but it rectifies this mistake by recognizing that logical dependencies are decision-relevant as well."

Replies from: Zane, gjm
comment by Zane · 2023-07-13T16:10:50.969Z · LW(p) · GW(p)

But wouldn't what Peano is capable of proving about your specific algorithm necessarily be "downstream" of the output of that algorithm itself? The Peano axioms are upstream, yes, but what Peano proves about a particular function depends on what that function is.

comment by gjm · 2023-07-13T16:36:30.392Z · LW(p) · GW(p)

I think maybe we're running into the problem that FDT isn't (AIUI) really very precisely defined. But I think I agree with Zane's reply to your comment: two (apparently) possible worlds where my algorithm produces different decisions are also worlds where PA proves that it does (or at least they might be; PA can't prove everything that's true), because those are worlds where I'm running different algorithms. And unless I'm confused (which I very much might be), that's much of the point of FDT: we recognize different decisions as being consequences of running different algorithms.

comment by Zane · 2023-07-12T22:33:42.757Z · LW(p) · GW(p)

I would think that FDT chooses Bet 2, unless I'm misunderstanding something about the role of Peano Arithmetic here. Taking Bet 2 results in P being true, and vice versa for Bet 1; therefore, the only options that are actually possible are the bottom left and the top right.

In fact, this seems like the exact sort of situation in which FDT can be easily shown to outperform CDT. CDT would reason along the lines of "Bet 1 is better if P is true, and better if P is false, and therefore better overall" without paying attention to the direct dependency between the output of your decision algorithm and the truth value of P.
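As a rough illustration of that contrast, here is a sketch reusing the payoffs quoted earlier plus one assumed value (Bet 1 when P is true, which is not given anywhere in this thread):

```python
# payoffs[bet][P]: gjm's quoted values plus one assumed entry (Bet 1 when P is true).
payoffs = {"Bet 1": {True: +2, False: -1},
           "Bet 2": {True: +1, False: -10}}

# CDT-style dominance: compare the bets within each fixed truth value of P.
bet1_dominates = all(payoffs["Bet 1"][p] > payoffs["Bet 2"][p] for p in (True, False))

# Dependency-aware choice: taking Bet 2 makes P true and taking Bet 1 makes it
# false, so only two cells of the table are actually attainable.
attainable = {"Bet 1": payoffs["Bet 1"][False], "Bet 2": payoffs["Bet 2"][True]}
best = max(attainable, key=attainable.get)

print(bet1_dominates, best)   # True 'Bet 2'
```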

I'm not quite sure what Yudkowsky and Soares meant by "dominance" there. I'd guess on priors that they meant FDT pays attention to those dependencies when deciding whether one strategy outperforms another... but yeah, they kind of worded it in a way that suggests the opposite interpretation.

Replies from: Sylvester Kollin
comment by Sylvester Kollin · 2023-07-13T15:04:40.236Z · LW(p) · GW(p)

(See my response to gjm's comment.)

comment by Charlie Steiner · 2023-07-12T20:42:28.586Z · LW(p) · GW(p)

I generally think of FDT as taking a causal model of the world and augmenting it with "logical nodes" (that have to be placed in a common-sense, non-systematic way, which is an issue with FDT). Whether or not some FDT agent regards "bet on 1 while PA proves I pick 2" as an option depends on how you've set up the logical nodes in your augmented model.

If the agent evaluates actions by pretending to control a logical node that's upstream of both its own action and PA proofs about its action (which is pretty reasonable), then "bet on 1 while PA proves I pick 2" is not a counterfactual it ever considers, and FDT picks 2.
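A minimal sketch of that second setup, with illustrative names; the only point is that the inconsistent pair never shows up among the counterfactuals the agent considers:

```python
# Sketch: one logical node, "output of the agent's algorithm", sits upstream of
# both the physical action and PA's theorem about that action. Counterfactuals
# are generated only by intervening on this node, so the action and the proof
# always move together.
def counterfactual(node_value):
    action = node_value                            # the act just copies the node
    pa_proves_i_pick_2 = (node_value == "Bet 2")   # the proof tracks the node too
    return action, pa_proves_i_pick_2

options = [counterfactual(v) for v in ("Bet 1", "Bet 2")]
print(options)
# [('Bet 1', False), ('Bet 2', True)] -- "bet on 1 while PA proves I pick 2"
# never appears as an option.
```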

Replies from: Sylvester Kollin
comment by Sylvester Kollin · 2023-07-13T15:06:20.389Z · LW(p) · GW(p)

Right, but it's fairly clear to me that this is not what the authors have in mind. For example, they cite Bjerring (2014), who proposes very specific and precise extensions of the Lewis-Stalnaker semantics.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2023-07-13T17:56:03.333Z · LW(p) · GW(p)

It's fairly clear to me that the authors do not have any specific and precise method in mind, Bjerring or no Bjerring.

From the paper:

While we do not yet have a satisfying account of how to perform counterpossible reasoning in practice, the human brain shows that reasonable heuristics exist.

 

Unfortunately, it’s not clear how to define a true operator. 

In fact, any agent-independent rule for constructing counterpossibles is doomed, because different questions can cause the same mathematical change to produce different imagined results. What mathematical propositions get chosen to be "upstream" or "downstream" has to depend on what you're thinking of as "doing the changing" or "doing the reacting" for the question at hand.

This is important both normatively (e.g. if you were somehow designing an AI that used FDT) and for understanding how humans reason about thought experiments - by constructing the counterfactuals in response to the proposed thought experiment.

Replies from: Sylvester Kollin
comment by Sylvester Kollin · 2023-07-14T10:48:06.498Z · LW(p) · GW(p)

It's fairly clear to me that the authors do not have any specific and precise method in mind, Bjerring or no Bjerring.

Of course they don't have a specific proposal in the paper. I'm just saying that it seems like they would want to be more precise, or that a full specification requires more work on counterpossibles (which you seem to be arguing against). From the abstract:

While not necessary for considering classic decision theory problems, we note that a full specification of FDT will require a non-trivial theory of logical counterfactuals and algorithmic similarity.

...

What mathematical propositions get chosen to be "upstream" or "downstream" has to depend on what you're thinking of as "doing the changing" or "doing the reacting" for the question at hand.

If this is in fact how we should think about FDT, the theory becomes very uninteresting since it seems like you can then just get whatever recommendations you want from it.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2023-07-14T11:10:10.483Z · LW(p) · GW(p)

If this is in fact how we should think about FDT, the theory becomes very uninteresting since it seems like you can then just get whatever recommendations you want from it.

Well, just because something is vague and relies on common sense doesn't mean you can get whatever answer you want from it.

And there's still plenty of progress to be made in formalizing FDT - it's just that a formalization of an FDT agent isn't going to reference some agent-independent way of computing counterpossibles. Instead it's going to have to contain standards for how best to compute counterpossibles on the fly in response to the needs of the moment.

comment by ProgramCrafter (programcrafter) · 2023-07-12T15:02:16.892Z · LW(p) · GW(p)

This is pretty much equivalent to the original Newcomb problem (https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality [LW · GW]).

FDT is not false; it's just not applicable here, since it has the precondition that the agent's reasoning process and decision do not influence the probability balance between P and not-P.

comment by Robin Richtsfeld (robin-richtsfeld) · 2023-07-12T18:38:18.697Z · LW(p) · GW(p)

A material conditional P --> Q is true unless P is true and Q is false.

The proposition P --> Q can be true even if P is false (in fact it is true whenever P is false, regardless of Q).
The proposition P --> Q is false only when P is true and Q is false.
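
For reference, a quick truth-table check (P --> Q is equivalent to (not P) or Q):

```python
# Truth table for the material conditional P --> Q, i.e. (not P) or Q.
for P in (True, False):
    for Q in (True, False):
        print(P, Q, (not P) or Q)
# Only the row P=True, Q=False yields False; every row with P=False yields True.
```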

You may assume the Peano axioms to be true or to be false; there is no right or wrong.