Posts

Is backwards causation necessarily absurd? 2020-01-14T19:25:44.419Z · score: 16 (7 votes)
The Universe Doesn't Have to Play Nice 2020-01-06T02:08:54.406Z · score: 16 (6 votes)
Theories That Can Explain Everything 2020-01-02T02:12:28.772Z · score: 9 (3 votes)
The Counterfactual Prisoner's Dilemma 2019-12-21T01:44:23.257Z · score: 19 (7 votes)
Counterfactual Mugging: Why should you pay? 2019-12-17T22:16:37.859Z · score: 4 (2 votes)
Counterfactuals: Smoking Lesion vs. Newcomb's 2019-12-08T21:02:05.972Z · score: 9 (4 votes)
What is an Evidential Decision Theory agent? 2019-12-05T13:48:57.981Z · score: 10 (3 votes)
Counterfactuals as a matter of Social Convention 2019-11-30T10:35:39.784Z · score: 11 (3 votes)
Open-Box Newcomb's Problem and the limitations of the Erasure framing 2019-11-28T11:32:11.870Z · score: 6 (3 votes)
Acting without a clear direction 2019-11-23T19:19:11.324Z · score: 9 (4 votes)
Book Review: Man's Search for Meaning by Viktor Frankl 2019-11-04T11:21:05.791Z · score: 18 (8 votes)
Economics and Evolutionary Psychology 2019-11-02T16:36:34.026Z · score: 12 (4 votes)
What are human values? - Thoughts and challenges 2019-11-02T10:52:51.585Z · score: 13 (4 votes)
When we substantially modify an old post should we edit directly or post a version 2? 2019-10-11T10:40:04.935Z · score: 13 (4 votes)
Relabelings vs. External References 2019-09-20T02:20:34.529Z · score: 13 (4 votes)
Counterfactuals are an Answer, Not a Question 2019-09-03T15:36:39.622Z · score: 14 (11 votes)
Chris_Leong's Shortform 2019-08-21T10:02:01.907Z · score: 11 (2 votes)
Emotions are not beliefs 2019-08-07T06:27:49.812Z · score: 26 (9 votes)
Arguments for the existence of qualia 2019-07-28T10:52:42.997Z · score: -2 (17 votes)
Against Excessive Apologising 2019-07-19T15:00:34.272Z · score: 7 (5 votes)
How does one get invited to the alignment forum? 2019-06-23T09:39:20.042Z · score: 17 (7 votes)
Should rationality be a movement? 2019-06-20T23:09:10.555Z · score: 53 (22 votes)
What kind of thing is logic in an ontological sense? 2019-06-12T22:28:47.443Z · score: 13 (4 votes)
Dissolving the zombie argument 2019-06-10T04:54:54.716Z · score: 1 (5 votes)
Visiting the Bay Area from 17-30 June 2019-06-07T02:40:03.668Z · score: 18 (4 votes)
Narcissism vs. social signalling 2019-05-12T03:26:31.552Z · score: 15 (7 votes)
Natural Structures and Definitions 2019-05-01T00:05:35.698Z · score: 21 (8 votes)
Liar Paradox Revisited 2019-04-17T23:02:45.875Z · score: 11 (3 votes)
Agent Foundation Foundations and the Rocket Alignment Problem 2019-04-09T11:33:46.925Z · score: 13 (5 votes)
Would solving logical counterfactuals solve anthropics? 2019-04-05T11:08:19.834Z · score: 23 (-2 votes)
Is there a difference between uncertainty over your utility function and uncertainty over outcomes? 2019-03-18T18:41:38.246Z · score: 15 (4 votes)
Deconfusing Logical Counterfactuals 2019-01-30T15:13:41.436Z · score: 26 (9 votes)
Is Agent Simulates Predictor a "fair" problem? 2019-01-24T13:18:13.745Z · score: 22 (6 votes)
Debate AI and the Decision to Release an AI 2019-01-17T14:36:53.512Z · score: 10 (4 votes)
Which approach is most promising for aligned AGI? 2019-01-08T02:19:50.278Z · score: 6 (2 votes)
On Abstract Systems 2019-01-06T23:41:52.563Z · score: 15 (9 votes)
On Disingenuity 2018-12-26T17:08:47.138Z · score: 33 (16 votes)
Best arguments against worrying about AI risk? 2018-12-23T14:57:09.905Z · score: 15 (7 votes)
What are some concrete problems about logical counterfactuals? 2018-12-16T10:20:26.618Z · score: 26 (6 votes)
An Extensive Categorisation of Infinite Paradoxes 2018-12-13T18:36:53.972Z · score: -2 (23 votes)
No option to report spam 2018-12-03T13:40:58.514Z · score: 38 (14 votes)
Summary: Surreal Decisions 2018-11-27T14:15:07.342Z · score: 27 (6 votes)
Suggestion: New material shouldn't be released too fast 2018-11-21T16:39:19.495Z · score: 24 (8 votes)
The Inspection Paradox is Everywhere 2018-11-15T10:55:43.654Z · score: 26 (7 votes)
One Doubt About Timeless Decision Theories 2018-10-22T01:39:57.302Z · score: 15 (7 votes)
Formal vs. Effective Pre-Commitment 2018-08-27T12:04:53.268Z · score: 11 (6 votes)
Decision Theory with F@#!ed-Up Reference Classes 2018-08-22T10:10:52.170Z · score: 10 (3 votes)
Logical Counterfactuals & the Cooperation Game 2018-08-14T14:00:34.032Z · score: 17 (7 votes)
A Short Note on UDT 2018-08-08T13:27:12.349Z · score: 11 (4 votes)
Counterfactuals for Perfect Predictors 2018-08-06T12:24:49.624Z · score: 13 (5 votes)

Comments

Comment by chris_leong on Have epistemic conditions always been this bad? · 2020-01-25T14:35:52.360Z · score: 23 (12 votes) · LW · GW

I believe that the main reason why this hasn't been discussed in any depth on Less Wrong is a) the norm set up by Eliezer Yudkowsky in Politics is the Mind-Killer and b) The Motte having become the default rationalsphere-adjacent location for this kind of discussion. That said, it's plausible that the situation has reached the point where this topic can no longer be avoided.

Comment by chris_leong on Chris_Leong's Shortform · 2020-01-20T16:54:59.937Z · score: 7 (4 votes) · LW · GW

There appears to be something of a Sensemaking community developing on the internet, which could roughly be described as a spirituality-inspired attempt at epistemology. This includes Rebel Wisdom, Future Thinkers, Emerge and maybe you could even count post-rationality. While there are undoubtedly lots of critiques that could be made of their epistemics, I'd suggest watching this space as I think some interesting ideas will emerge out of it.

Comment by chris_leong on Reality-Revealing and Reality-Masking Puzzles · 2020-01-18T00:40:54.533Z · score: 5 (3 votes) · LW · GW

You are talking about it as though it is a property of the puzzle, when it seems likely to be an interaction between the person and the puzzle.

Comment by chris_leong on Reality-Revealing and Reality-Masking Puzzles · 2020-01-16T23:37:26.817Z · score: 6 (4 votes) · LW · GW

Interesting post, although I wish "reality-masking" puzzles had been defined more precisely. Most of this post is about patterns of disorientation, or about disabling parts of the epistemic immune system, more than anything that directly masks reality.

Also related: Pseudo-rationality

Comment by chris_leong on Is backwards causation necessarily absurd? · 2020-01-15T00:55:47.974Z · score: 2 (1 votes) · LW · GW

"That said, I don't think we are really explaining or de-confusing anything if we appeal to backwards causation to understand Newcomb's Problem or argue for a particular solution to it." - How come?

Comment by chris_leong on Is backwards causation necessarily absurd? · 2020-01-15T00:53:34.950Z · score: 2 (1 votes) · LW · GW

"Relativity does not make the arrow of time relative to observer" - I didn't say that. I said there was no unified notion of the present

Comment by chris_leong on Why a New Rationalization Sequence? · 2020-01-14T17:46:32.548Z · score: 2 (1 votes) · LW · GW

Maybe you can't dream the actual process of factoring a large number, but you can dream of having just completed such a factoring, with the result having come out correct.

Comment by chris_leong on Dissolving Confusion around Functional Decision Theory · 2020-01-06T20:46:53.977Z · score: 3 (2 votes) · LW · GW

I liked the diagrams as I think they'll be clarifying to most people. However, in response to:

I think that many proponents of FDT fail to make this point: FDT’s advantage is that it shifts the question to what type of agent you want to be--not misleading questions of what types of “choices” you want to make

FDT involves choosing an observation-action mapping, which is effectively the same as choosing an algorithm provided that the reward never depends on why you make a particular decision and the mapping space is finite.

One problem with trying to model it as a choice of which algorithm to run is that you are running code to select the algorithm, so if the actual algorithm ever matters, as opposed to just the observation-action map, you'd need to take the selection algorithm into account. A toy sketch of the mapping-level choice is below.
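To make this concrete, here's a minimal sketch (my own toy example, not anything from the post under discussion; it borrows the standard $100/$10,000 counterfactual-mugging payoffs) of choosing over the finite space of observation-action mappings, where the payoff on one observation depends on what the mapping would do on the other:

```python
from itertools import product

# Toy illustration: choosing an observation-action mapping
# rather than a single action.
observations = ["heads", "tails"]
actions = ["pay", "refuse"]

def expected_utility(policy):
    # Counterfactual-mugging-style payoffs (the standard $100/$10,000
    # numbers, assumed for illustration): on tails you may pay $100;
    # on heads Omega pays $10,000 iff you would have paid on tails.
    # Note the heads payoff depends on the whole mapping, not just
    # on the action actually taken on heads.
    u_heads = 10_000 if policy["tails"] == "pay" else 0
    u_tails = -100 if policy["tails"] == "pay" else 0
    return 0.5 * u_heads + 0.5 * u_tails

# The mapping space is finite: |actions| ** |observations| policies.
policies = [dict(zip(observations, choice))
            for choice in product(actions, repeat=len(observations))]
best = max(policies, key=expected_utility)
print(best, expected_utility(best))  # pays on tails: 0.5*10000 - 0.5*100 = 4950
```

Note that nothing in the scoring looks at how the mapping was selected, which is exactly the condition under which mapping-choice and algorithm-choice coincide.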

Comment by chris_leong on The Universe Doesn't Have to Play Nice · 2020-01-06T20:33:53.958Z · score: 2 (1 votes) · LW · GW

The point about simulations was merely to show that the idea of a universe with the majority of consciousness being Boltzmann Brains isn't absurd.

Comment by chris_leong on The Universe Doesn't Have to Play Nice · 2020-01-06T11:20:12.342Z · score: 4 (2 votes) · LW · GW

I'm sure I'll link back to this post soon. But this post is motivated by a few things such as:

a) Disagreements over consciousness - if non-materialist qualia existed, then we wouldn't be able to know about them empirically. But the universe doesn't have to play nice and make all phenomena accessible to our scientific instruments, so we should have more uncertainty about this than people generally possess.

b) The Theories That Can Explain Everything post - as nice as it'd be to just evaluate theories empirically, there's no reason why we can't have a theory that is important for determining expectations yet isn't cleanly falsifiable.

Comment by chris_leong on Normalization of Deviance · 2020-01-04T01:35:45.751Z · score: 4 (2 votes) · LW · GW

I thought it might be useful to give an example of when normalisation of deviance is functional. Let's suppose that a hospital has to treat patients, but because of short-staffing there would be no way of filling out all of the paperwork properly whilst treating all the patients, so the doctors don't fill out all of the fields.

It's also important to mention the possibility of scapegoating - perhaps the deviance is justified and practically everyone is working in that manner, but if something goes wrong you may be blamed anyway. So it's very important to take this small chance of an extremely harsh punishment into account.

Comment by chris_leong on Theories That Can Explain Everything · 2020-01-03T23:54:27.839Z · score: 2 (1 votes) · LW · GW

Interesting idea. What is the use of organising beliefs without updating them?

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2020-01-03T15:01:05.525Z · score: 2 (1 votes) · LW · GW
My argument is that we ARE thinking ahead about counterfactual mugging right now, in considering the question

When we think about counterfactual muggings, we naturally imagine the possibility of facing a counterfactual mugging in the future. I don't dispute the value of pre-committing, either to taking a specific action or to acting updatelessly. However, instead of imagining a future mugging, we could also imagine a present mugging where we didn't have time to make any pre-commitments. I don't think it is immediately obvious that we should think updatelessly; I believe that requires further justification.

The role of thinking about decision theory now is to help guide the actions of my future self

This is effectively an attempt at proof-by-definition.

I think the average person is going to be thinking about things like duty, honor, and consistency, which can serve some of the purpose of updatelessness. But sure, updateful reasoning is a natural starting point, particularly coming from a background of modern economics or Bayesian decision theory.

If someone's default is already updateless reasoning, then there's no need for us to talk them into it. It's only people with an updateful default that we need to convince (until recently I had an updateful default).

And when we think about problems like counterfactual mugging, the description of the problem requires that there's both the possibility of heads and tails

It requires a counterfactual possibility, not an actual possibility. And a counterfactual possibility isn't actual, it's counter to the factual. So it's not clear this has any relevance.

It looks to me like you're tripping yourself up with verbal arguments that aren't at all obviously true. The reason I believe the Counterfactual Prisoner's Dilemma is important is that it is a mathematical result that doesn't require much in the way of assumptions. Sure, it still has to be interpreted, but it seems hard to find an interpretation that avoids the conclusion that the updateful perspective doesn't quite succeed on its own terms.
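For concreteness, here is a sketch of that result under the standard payoff assumptions (you are asked to pay $100 in whichever branch occurs, and receive $10,000 in that branch iff you would have paid in the other branch; these numbers are the conventional ones, not anything essential):

```python
from itertools import product

# Sketch of the Counterfactual Prisoner's Dilemma payoff table,
# assuming the standard $100 cost / $10,000 reward numbers.
# In each branch you are asked to pay $100; Omega rewards you
# $10,000 in that branch iff you would have paid in the *other* branch.
def payoff(policy, flip):
    other = "tails" if flip == "heads" else "heads"
    reward = 10_000 if policy[other] else 0
    cost = 100 if policy[flip] else 0
    return reward - cost

for pay_heads, pay_tails in product([True, False], repeat=2):
    policy = {"heads": pay_heads, "tails": pay_tails}
    print(policy,
          {flip: payoff(policy, flip) for flip in ("heads", "tails")})
# Always paying yields 9,900 whichever way the coin lands; never
# paying yields 0 in both branches. So the updateful agent, who
# refuses in whichever branch it finds itself, is guaranteed to do
# worse than the updateless agent, with no appeal to averages needed.
```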

Comment by chris_leong on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-03T00:52:44.719Z · score: 15 (6 votes) · LW · GW

Have you considered cross-posting this to the EA forum?

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2020-01-02T12:52:53.293Z · score: 2 (1 votes) · LW · GW

You feel that I'm begging the question. I guess I take thinking only about this counterfactual as the default position, since that is where an average person is likely to be starting from, and I was trying to see whether I could find an argument strong enough to displace it. So I'll freely admit I haven't provided a first-principles argument for focusing just on this counterfactual.

OK, but I don't see how that addresses my argument.

Your argument is that we need to look at iterated situations to understand learning. Sure, but that doesn't mean we have to interpret every problem in iterated form. If we need to understand learning better, we can look at a few iterated problems beforehand, rather than turning this one into an iterated problem.

The average includes worlds that you know you are not in. So this doesn't help us justify taking these counterfactuals into account.

Let me explain more clearly why this is a circular argument:

a) You want to show that we should take counterfactuals into account when making decisions

b) You argue that this way of making decisions does better on average

c) The average includes the very counterfactuals whose value is in question. So b depends on a already being established, which makes the argument circular.

Comment by chris_leong on Theories That Can Explain Everything · 2020-01-02T12:24:59.428Z · score: 2 (1 votes) · LW · GW

Saying anything is possible is a prediction, but a trivial one. Nonetheless, it changes expectations if beforehand only A seemed possible.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2020-01-02T00:24:16.567Z · score: 2 (1 votes) · LW · GW
The argument is that we have to understand learning in the first place to be able to make these arguments, and iterated situations are the easiest setting to do that in

Iterated situations are indeed useful for understanding learning. But I'm trying to abstract over the learning insofar as I can. I care that you get the information required for the problem, but not so much how you get it.

Especially when we can see that the latter way of reasoning does better on average?

The average includes worlds that you know you are not in. So this doesn't help us justify taking these counterfactuals into account, indeed for us to care about the average we need to already have an independent reason to care about these counterfactuals.

I kind of feel like you're just repeatedly denying this line of reasoning. Yes, the situation in front of you is that you're in the risk-hand world rather than the risk-life world. But this is just question-begging with respect to updateful reasoning.

I'm not saying you should reason in this way. You should reason updatelessly. But in order to get to the point of finding the Counterfactual Prisoner's Dilemma, which I consider a satisfactory justification, I had to rigorously question every other solution until I found one that could withstand the questioning. This seems like a better solution, as it is less dependent on tricky-to-evaluate philosophical claims.

Ah, that's kind of the first reply from you that's surprised me in a bit

Well, thinking about a decision after you make it won't do you much good, so you're pretty much always thinking about decisions before you make them. But timelessness involves thinking about decisions before you end up facing them.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2020-01-01T14:30:43.719Z · score: 1 (2 votes) · LW · GW
If an agent is really in a pure one-shot case, that agent can do anything at all

You can learn about a situation in ways other than facing that exact situation yourself. For example, you may observe other agents facing that situation, or receive testimony from an agent that has proven itself trustworthy. You don't even seem to disagree with me here, as you wrote: "you can learn enough about the universe to be confident you're now in a counterfactual mugging without ever having faced one before"

"This goes along with the idea that it's unreasonable to consider agents as if they emerge spontaneously from a vacuum, face a single decision problem, and then disappear" - I agree with this. I asked this question because I didn't have a good model of how to conceptualise decision theory problems, although I think I have a clearer idea now that we've got the Counterfactual Prisoner's Dilemma.

One way of appealing to human moral intuition

This doesn't work on counterfactually selfish agents.

Decision theory should be reflectively endorsed decision theory. That's what decision theory basically is: thinking we do ahead of time which is supposed to help us make decisions

Thinking about decisions before you make them != thinking about decisions timelessly

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-30T22:16:35.253Z · score: 2 (1 votes) · LW · GW

"Basically I keep talking about how "yes you can refuse a finite number of muggings"" - considering I'm considering the case when you are only mugged once, that sounds an awful lot like saying it's reasonable to choose not to pay.

"But if I've considered things ahead of time" - a key part of counterfactual mugging is that you haven't considered things ahead of time. I think it is important to engage with this aspect or explain why this doesn't make sense.

"And further, there's the classic argument that you should always consider what you would have committed to ahead of time" - imagine instead of $50 it was your hand being cut off to save your life in the counterfactual. It's going to be awfully tempting to keep your hand. Why is what you would have committed to, but didn't relevant?

My goal is to understand versions that haven't been watered down or simplified.

Comment by chris_leong on Speaking Truth to Power Is a Schelling Point · 2019-12-30T12:50:09.481Z · score: 10 (4 votes) · LW · GW

TLDR: When you engage in intellectual dishonesty due to social pressure, you distort your perspective of the world in a way that makes further intellectual dishonesty seem justified. This results in a downwards spiral.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-30T12:39:32.047Z · score: 2 (1 votes) · LW · GW
I'm not sure, but it seems like our disagreement might be around the magnitude of this somehow

My interest is in the counterfactual mugging in front of you, as this is the hardest part to justify. Future muggings aren't a difficult problem.

Basically, I don't think that way of thinking completely holds when we're dealing with logical uncertainty. A counterlogical mugging is a situation where time to think can, in a certain sense, hurt (if you fully update on that thinking, anyway)

Are you saying that it will pre-commit to something before it receives all the information?

Comment by chris_leong on Stupidity and Dishonesty Explain Each Other Away · 2019-12-30T12:22:43.699Z · score: 2 (1 votes) · LW · GW

"People offering up bullshit statements are often more than intelligent enough to get the right answer, but they're just not motivated to do so because they're trying to optimize for something that's orthogonal to truth." - well that's how it is at first, but after a while they'll more than likely end up tripping themselves up and start believing a lot of what they've been saying

Comment by chris_leong on Stupidity and Dishonesty Explain Each Other Away · 2019-12-29T00:29:59.777Z · score: 10 (3 votes) · LW · GW

I get what you're saying, but there's a loophole. An extreme amount of intellectual dishonesty can mean that someone just doesn't care about the truth at all, and so they end up believing things that are just stupid. So it's not that they are lying, they just don't care enough to figure out the truth. In other words, they can be both extremely dishonest and believe utterly stupid things. Maybe they aren't technically stupid, since they haven't tried very hard to figure out what's true or false, but that's something of a cold comfort. Actually, it's worse than that: because they have all these beliefs that weren't formed rationally, when they do actually try to think rationally about the world, they'll end up using these false beliefs to come to unjustified conclusions.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-24T12:35:55.900Z · score: 2 (1 votes) · LW · GW
It seems to me as if you're ignoring the general thrust of my position, which is that the notion of winning that's important is the one we have in-hand when we are thinking about what decision procedure to use

Why can't I use this argument for CDT in Newcomb's?

It seems right to focus on future actions, because those are the ones which our current thoughts about which decision theory to adopt will influence.

What I meant to say, instead of "future actions", is that it is clear that we should commit to UDT for future muggings, but less clear if the mugging was already set up.

I think that since no agent can be perfect from the start, we always have to imagine that an agent will make some mistakes before it gets on the right track

The agent should still be able to solve such scenarios given a sufficient amount of time to think and the necessary starting information, such as reliable reports about what happened to others who encountered counterfactual muggers.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-22T01:54:58.276Z · score: 2 (1 votes) · LW · GW

See here: https://www.lesswrong.com/posts/2THFt7BChfCgwYDeA/let-s-discuss-functional-decision-theory#XvXn5NXNgdPLDAabQ

Comment by chris_leong on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2019-12-22T00:12:17.180Z · score: 7 (5 votes) · LW · GW

One worry I have about a norm of not discussing politics is that sometimes politics will affect the rationality community. For example, the rationality community will have to deal with claims of harassment or of making people feel unwelcome, and how you handle that probably depends on your stance on feminism. So avoiding discussing politics could make these decisions worse. (That said, I'm not actually claiming that discussing these issues would lead to an improvement, rather than everyone just sticking to whatever view they had before.)

Comment by chris_leong on The Counterfactual Prisoner's Dilemma · 2019-12-21T12:16:31.657Z · score: 2 (1 votes) · LW · GW

We can assume that the coin is flipped out of your sight.

Comment by chris_leong on The Counterfactual Prisoner's Dilemma · 2019-12-21T11:12:56.865Z · score: 4 (2 votes) · LW · GW

Only if the criminal messes up their expected utility calculation

Comment by chris_leong on The Counterfactual Prisoner's Dilemma · 2019-12-21T11:07:08.930Z · score: 4 (2 votes) · LW · GW

"How about "Omega knows whether you would pay in the counterfactual mugging setup if told that you had lost and will reward you for paying if you lose, but you don't know that you would get rewarded once you pay up". Is there anything I have missed?" - you aren't told that you "lost" as there is no losing coin flip in this scenario since it is symmetric. You are told which way the coin came up. Anyway, I updated the post to clarify this

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-20T17:42:08.606Z · score: 2 (1 votes) · LW · GW
You grant that this makes sense for that case, but you seem skeptical that the same reasoning applies to humans

I ultimately don't see much of a distinction between humans and AIs, but let me clarify. If we had the ability to perfectly pre-commit then we'd make pre-commitments that effectively would be the same as an AI self-modifying. Without this ability, this argument is slightly harder to make, but I think it still applies. I've attempted making it in the past although I don't really feel I completely succeeded.

Ah, it seems any case where a decision procedure would prefer to make a commitment ahead of time but would prefer to do something different in the moment is a point against that decision procedure

I agree that it's a point against the decision procedure, but it isn't necessarily conclusive. This could persuade someone to part with $100, but maybe not to allow themselves to be tortured.

You might argue that beliefs are for true things, so I can't legitimately discount ways-of-thinking just because they have bad consequences

I actually agree with Eliezer's argument that winning is more important than abstract conventions of thought. It's just that it's not always clear which option is winning. Indeed here, as I've argued, winning seems to match more directly to not paying, and abstract conventions of thought to the arguments about the counterfactual.

A CDT or EDT agent who asks itself how best to act in future situations to maximize expected value as estimated by its current self will arrive at UDT

Yeah, I'm not disputing pre-committing to UDT for future actions; the question is more difficult when it comes to past actions. One thought: even if you're in a counterfactual mugging that was set up before you came into existence, you might, before learning about it, still have time to pre-commit to paying in any such situation.

However, I do agree that even according to UDT there's a subjective question of how much information should be incorporated into the prior

Well, this is the part of the question I'm interested in. As I said, I have no objection to pre-committing to UDT for future actions.

If we are being simulated, then the other self (in a position to get $1000) really does exist

I've commented on this in the past and I still see this as imprecise reasoning. I think I should write a post addressing this directly soon.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-20T11:57:37.258Z · score: 5 (2 votes) · LW · GW

Actually, the counterfactual agent makes a different observation (heads instead of tails), so their actions aren't necessarily linked.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-19T23:41:42.801Z · score: 2 (1 votes) · LW · GW

Of course

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-19T23:25:29.068Z · score: 2 (1 votes) · LW · GW

No. I don't know the accuracy of the prediction. It's just that I already know the result of the coin flip.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-19T21:31:34.933Z · score: 3 (2 votes) · LW · GW

Do you think you'll write a post on it? Because I was thinking of writing a post, but if you were planning on doing this then that would be even better as it would probably get more attention.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-19T15:26:27.836Z · score: 2 (1 votes) · LW · GW

"Imagine that before being faced with counterfactual mugging, the agent can make a side bet on Omega's coin" - I don't know if that works. Part of counterfactual mugging is that you aren't told before the problem that you might be mugged, otherwise you could just pre-commit.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-19T15:21:16.074Z · score: 5 (2 votes) · LW · GW

Yeah, I actually stumbled upon this argument myself this morning. Has anyone written this up beyond this comment? It seems like the most persuasive argument for paying, and it suggests that never caring is not a viable position.

I was thinking today about whether there are any intermediate positions, but I don't think they are viable. Only caring about counterfactuals when you have a prisoner's dilemma-like situation seems an unprincipled fudge.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-19T11:04:06.132Z · score: 2 (1 votes) · LW · GW

If you are in this situation, you have the practical reality that paying the $100 loses you $100, and a theoretical argument that you should pay anyway. If by "just ask which algorithm wins" you mean the practical reality of the situation described, then you wouldn't choose UDT. If you instead take "just ask which algorithm wins" to mean setting up an empirical experiment, then you'd have to decide whether to consider all agents who encounter the coin flip, or only those who see tails, at which point there is no need to run the experiment (see the sketch below). If you are instead proposing figuring out which algorithm wins according to theory, then that's a bit of a tautology, as that's what I'm already trying to do.
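As a toy illustration of how the choice of reference class settles the verdict before any experiment is run (my own sketch, again assuming the standard $100/$10,000 counterfactual-mugging payoffs):

```python
import random

random.seed(0)

# Toy simulation of counterfactual mugging with the standard
# (assumed) payoffs: on tails a payer loses $100; on heads a
# payer receives $10,000 because Omega predicts they would pay.
def run(n, pays):
    results_all, results_tails = [], []
    for _ in range(n):
        flip = random.choice(["heads", "tails"])
        if flip == "tails":
            u = -100 if pays else 0
            results_tails.append(u)
        else:
            u = 10_000 if pays else 0
        results_all.append(u)
    return (sum(results_all) / len(results_all),
            sum(results_tails) / len(results_tails))

for pays in (True, False):
    avg_all, avg_tails = run(100_000, pays)
    print(f"pays={pays}: avg over all flips {avg_all:8.0f}, "
          f"avg given tails {avg_tails:6.0f}")
# The verdict flips with the reference class: payers win on average
# over all flips (~4950 vs 0) but lose among tails-observers (-100 vs 0),
# so choosing the reference class already answers the question.
```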

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-19T10:57:00.407Z · score: 2 (1 votes) · LW · GW

I guess my argument is based on imagining at the start that agents either can care about counterfactual selves or not. Agents that don't are a bit controversial, so let's imagine such an agent and see if we run into any issues. So imagine a consistent agent that doesn't care about counterfactual selves, except insofar as they "could be it" from its current epistemic position. I can't see any issues with this - it seems consistent. And my challenge is for you to explain why this isn't a valid set of values to have.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-19T10:48:51.152Z · score: 2 (1 votes) · LW · GW

Good point about risk also being a factor, but the point in question isn't how to perform an expected utility calculation; it's the justification for doing so.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-19T10:47:13.870Z · score: 2 (1 votes) · LW · GW

Sounds testable in theory, but not in practice.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-18T16:17:04.184Z · score: 2 (1 votes) · LW · GW

Interesting. Do you take caring about counterfactual selves as foundational - in the sense that there is no why, you either do or you don't?

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2019-12-18T16:15:50.738Z · score: 2 (1 votes) · LW · GW

Again, we seem to just have foundational disagreements here. Free will is one of those philosophical topics that I lost interest in a long time ago, so I'm happy to leave it to others to debate.

Comment by chris_leong on Two Types of Updatelessness · 2019-12-17T23:06:37.789Z · score: 2 (1 votes) · LW · GW

I also believe that mixed-upside updatelessness is more complex than is often presented and I'm planning to delve more into this. In fact, I just posted a question asking why you should pay in Counterfactual Mugging. Have you had any more thoughts about this since?

Comment by chris_leong on Affordance Widths · 2019-12-11T16:03:25.408Z · score: 1 (4 votes) · LW · GW

I would like to see a post on this concept included in the best of 2018, but I also agree that there are reputational risks given the author. I'd like to suggest a possible compromise: perhaps we could include the concept, but write our own explanation of it instead of including this article?

Comment by chris_leong on Counterfactuals: Smoking Lesion vs. Newcomb's · 2019-12-11T00:54:33.795Z · score: 2 (1 votes) · LW · GW

Smoking Lesion is an interesting problem in that it's really not that well defined. If an FDT agent is making the decision, then its reference class should be other FDT agents, so all agents in the same class make the same decision, contrary to the lesion, which is supposed to affect the probability. The approach that both of us take is to break the causal link from the lesion to your decision. I really didn't express my criticism well above, because what I said also kind of applies to my post. However, the difference is that you are engaging in world counting, and in world counting you should see the linkage, while my approach involves explicitly reinterpreting the problem to break the linkage. So my issue is that there seems to be some preprocessing happening before the world counting, which means that your approach isn't just a matter of world counting as you claim. In other words, it doesn't match the label on the tin.

Comment by chris_leong on Counterfactuals: Smoking Lesion vs. Newcomb's · 2019-12-10T01:10:19.358Z · score: 2 (1 votes) · LW · GW

I know that you removed the $1000 in that case. But what is the general algorithm or rule that causes you to remove the $1000? What if the hospital cost $999 if you chose the $1, or $1000 otherwise?

I guess it seems to me that once you've removed the $1000 you've removed the challenging element of the problem, so solving it doesn't count for very much.

Comment by chris_leong on Counterfactuals: Smoking Lesion vs. Newcomb's · 2019-12-10T01:07:48.152Z · score: 2 (1 votes) · LW · GW

Admittedly they could have been clearer, but I still think you're misinterpreting the FDT paper. Sorry, what I meant was that smoking was correlated with an increased chance of cancer, not that there was any causal link.

Comment by chris_leong on Decoupling vs Contextualising Norms · 2019-12-10T01:02:30.976Z · score: 4 (2 votes) · LW · GW

I really don't like the term "jumbled", as some people would likely object much more to being labelled as jumbled than as a contextualiser. The rest of this comment makes some good points, but sometimes less is more. I do want to edit this article, but I think I'll mostly engage with Zack's points and reread the article.

Comment by chris_leong on Counterfactuals: Smoking Lesion vs. Newcomb's · 2019-12-09T20:13:51.160Z · score: 2 (1 votes) · LW · GW

Yeah, there's definitely a tension between counterfactuals being a social-linguistic construct and being pragmatically useful (such as what you might need for a planning agent). I don't completely know how to resolve this yet, but this post makes a start by noting that, in addition to the social-linguistic elements, the strength of the physical linkage between elements is important as well. My intuition is that there are a bunch of properties that make something more or less counterfactual, and the social-linguistic conventions are about a) which of these properties are present when the problem is ambiguous and b) which of these properties need to be satisfied before we accept a counterfactual as valid.

Comment by chris_leong on Counterfactuals: Smoking Lesion vs. Newcomb's · 2019-12-09T15:36:04.892Z · score: 2 (1 votes) · LW · GW

Yeah, I agree that I haven't completely engaged with the issue of "corrupted hardware", but it seems like any attempt to do this would require so much interpretation that I wouldn't expect to obtain agreement over whether I had interpreted it correctly. In any case, my aim is purely to solve counterfactuals for non-corrupted agents, at least for now. But glad to see that someone agrees with me about socio-linguistic conventions :-)