Posts

Referencing the Unreferencable 2020-04-04T10:42:08.164Z · score: 7 (1 votes)
The Hammer and the Dance 2020-03-20T16:09:26.740Z · score: 48 (13 votes)
A Sketch of Answers for Physicalists 2020-03-14T02:27:13.196Z · score: 24 (6 votes)
Vulnerabilities in CDT and TI-unaware agents 2020-03-10T14:14:54.530Z · score: 5 (5 votes)
Analyticity Depends On Definitions 2020-03-08T14:00:34.492Z · score: 10 (3 votes)
Embedded vs. External Decision Problems 2020-03-05T00:23:07.970Z · score: 9 (2 votes)
Abstract Plans Lead to Failure 2020-02-27T21:20:11.554Z · score: 22 (11 votes)
Stuck Exploration 2020-02-19T12:31:55.276Z · score: 16 (6 votes)
A Memetic Mediator Manifesto 2020-02-17T02:14:56.683Z · score: 12 (3 votes)
Reference Post: Trivial Decision Problem 2020-02-15T17:13:26.029Z · score: 17 (7 votes)
Is backwards causation necessarily absurd? 2020-01-14T19:25:44.419Z · score: 16 (7 votes)
The Universe Doesn't Have to Play Nice 2020-01-06T02:08:54.406Z · score: 17 (7 votes)
Theories That Can Explain Everything 2020-01-02T02:12:28.772Z · score: 9 (3 votes)
The Counterfactual Prisoner's Dilemma 2019-12-21T01:44:23.257Z · score: 20 (8 votes)
Counterfactual Mugging: Why should you pay? 2019-12-17T22:16:37.859Z · score: 5 (3 votes)
Counterfactuals: Smoking Lesion vs. Newcomb's 2019-12-08T21:02:05.972Z · score: 9 (4 votes)
What is an Evidential Decision Theory agent? 2019-12-05T13:48:57.981Z · score: 10 (3 votes)
Counterfactuals as a matter of Social Convention 2019-11-30T10:35:39.784Z · score: 11 (3 votes)
Transparent Newcomb's Problem and the limitations of the Erasure framing 2019-11-28T11:32:11.870Z · score: 6 (3 votes)
Acting without a clear direction 2019-11-23T19:19:11.324Z · score: 9 (4 votes)
Book Review: Man's Search for Meaning by Victor Frankel 2019-11-04T11:21:05.791Z · score: 18 (8 votes)
Economics and Evolutionary Psychology 2019-11-02T16:36:34.026Z · score: 12 (4 votes)
What are human values? - Thoughts and challenges 2019-11-02T10:52:51.585Z · score: 13 (4 votes)
When we substantially modify an old post should we edit directly or post a version 2? 2019-10-11T10:40:04.935Z · score: 13 (4 votes)
Relabelings vs. External References 2019-09-20T02:20:34.529Z · score: 13 (4 votes)
Counterfactuals are an Answer, Not a Question 2019-09-03T15:36:39.622Z · score: 14 (11 votes)
Chris_Leong's Shortform 2019-08-21T10:02:01.907Z · score: 11 (2 votes)
Emotions are not beliefs 2019-08-07T06:27:49.812Z · score: 26 (9 votes)
Arguments for the existence of qualia 2019-07-28T10:52:42.997Z · score: -2 (19 votes)
Against Excessive Apologising 2019-07-19T15:00:34.272Z · score: 7 (5 votes)
How does one get invited to the alignment forum? 2019-06-23T09:39:20.042Z · score: 17 (7 votes)
Should rationality be a movement? 2019-06-20T23:09:10.555Z · score: 53 (22 votes)
What kind of thing is logic in an ontological sense? 2019-06-12T22:28:47.443Z · score: 13 (4 votes)
Dissolving the zombie argument 2019-06-10T04:54:54.716Z · score: 1 (5 votes)
Visiting the Bay Area from 17-30 June 2019-06-07T02:40:03.668Z · score: 18 (4 votes)
Narcissism vs. social signalling 2019-05-12T03:26:31.552Z · score: 15 (7 votes)
Natural Structures and Definitions 2019-05-01T00:05:35.698Z · score: 21 (8 votes)
Liar Paradox Revisited 2019-04-17T23:02:45.875Z · score: 11 (3 votes)
Agent Foundation Foundations and the Rocket Alignment Problem 2019-04-09T11:33:46.925Z · score: 13 (5 votes)
Would solving logical counterfactuals solve anthropics? 2019-04-05T11:08:19.834Z · score: 23 (-2 votes)
Is there a difference between uncertainty over your utility function and uncertainty over outcomes? 2019-03-18T18:41:38.246Z · score: 15 (4 votes)
Deconfusing Logical Counterfactuals 2019-01-30T15:13:41.436Z · score: 26 (9 votes)
Is Agent Simulates Predictor a "fair" problem? 2019-01-24T13:18:13.745Z · score: 22 (6 votes)
Debate AI and the Decision to Release an AI 2019-01-17T14:36:53.512Z · score: 10 (4 votes)
Which approach is most promising for aligned AGI? 2019-01-08T02:19:50.278Z · score: 6 (2 votes)
On Abstract Systems 2019-01-06T23:41:52.563Z · score: 15 (9 votes)
On Disingenuity 2018-12-26T17:08:47.138Z · score: 33 (16 votes)
Best arguments against worrying about AI risk? 2018-12-23T14:57:09.905Z · score: 15 (7 votes)
What are some concrete problems about logical counterfactuals? 2018-12-16T10:20:26.618Z · score: 26 (6 votes)
An Extensive Categorisation of Infinite Paradoxes 2018-12-13T18:36:53.972Z · score: -2 (23 votes)

Comments

Comment by chris_leong on In the presence of disinformation, collective epistemology requires local modeling · 2020-04-04T11:26:20.199Z · score: 2 (1 votes) · LW · GW

"One thing that I find is often disappointingly absent from LW discussions of epistemology is how much the appropriate epistemology depends on your goals and your intellectual abilities" - Never really thought of it that way, but makes a lot of sense.

Comment by chris_leong on The absurdity of un-referenceable entities · 2020-04-04T09:40:37.254Z · score: 2 (1 votes) · LW · GW

"However if I say "there exists an un-referenceable entity, which has properties x, y, and z" then that really looks like a reference to a particular" - It's a class that may only contain one if we choose the properties correctly

Comment by chris_leong on Is the coronavirus the most important thing to be focusing on right now? · 2020-03-25T15:07:38.925Z · score: 4 (2 votes) · LW · GW

I'm curious about the third argument: why do you think it is likely that significant players will notice the contribution of Less Wrong?

Comment by chris_leong on Are HEPA filters likely to pull COVID-19 out of the air? · 2020-03-25T13:21:40.976Z · score: 2 (1 votes) · LW · GW

Is HEPA the highest standard or are there higher quality filters that could remove viruses?

Comment by chris_leong on Announcement: LessWrong Coronavirus Links Database 2.0 · 2020-03-25T10:37:20.664Z · score: 2 (1 votes) · LW · GW

Are there any plans to use this for anything else in the future?

Comment by chris_leong on Against Dog Ownership · 2020-03-23T15:42:18.135Z · score: 26 (16 votes) · LW · GW

The author makes some good points, but:

  • I think they worry too much about the submissiveness of the relationship. Submissiveness is a much more common desire than people acknowledge - not just in terms of sexuality, but in terms of a desire for a father figure or a great leader to tell people what to do. So it's a common desire not just in particular moments, but in how people live their whole lives.
  • I don't agree with the point about the relationship being invalid because you don't have to work for it. I agree that this would be bad in a romantic relationship because it'd hamper your personal development, but I really don't think that getting a cat instead of a dog will have a large effect. In fact, the safety provided by the unconditionality of a pet's love may provide someone with the security to take more risks in their relationships in the real world.
  • I don't think dogs need meaning in quite the same way as humans. The author acknowledges that they've personified dogs to an extent and tries to show that their argument holds anyway. However, I don't think they've entirely avoided the personification trap. I don't deny that dogs may have instincts, such as hunting or herding, that are unfulfilled in modern life. These are instincts, and we should be concerned about them being unfulfilled, but I don't think we should equate them with a life purpose.

In any case, dogs as pets probably increase empathy for animals significantly, so we should encourage more pets, not fewer.

Comment by chris_leong on [Meta] Do you want AIS Webinars? · 2020-03-21T16:20:43.039Z · score: 4 (3 votes) · LW · GW

I would be keen to run a webinar on Logical Counterfactuals.

Comment by chris_leong on Is the coronavirus the most important thing to be focusing on right now? · 2020-03-19T01:11:30.063Z · score: 12 (6 votes) · LW · GW

Before it was even clear it'd be this big a threat, I wrote: EA Should Wargame Coronavirus. Now I think there's an even stronger argument for it.

I think that this offers us valuable experience in dealing with one plausible source of existential risk. We don't want AI Safety people distracted from AI Safety, but at the same time I think the community will learn a lot by embarking on this project together.

Comment by chris_leong on A Sketch of Answers for Physicalists · 2020-03-15T20:49:24.479Z · score: 2 (1 votes) · LW · GW

I guess I should have been more precise. Imagine a game where we can see all the information, but some characters inside only have access to limited info.

"The outside perspective is outside but it is not observer-independent"

Sure, but it's not subject to the world-internal observer effects

Comment by chris_leong on A Sketch of Answers for Physicalists · 2020-03-15T19:37:40.942Z · score: 2 (1 votes) · LW · GW

Maybe this will help. Consider the characters in a video game. We are an external observer as we can see what is happening in the game, but they can't see us. The point isn't that we can see ourselves from the outside, but that we can imagine what it would be like to be seen from the outside.

"Our ability to imagine data about us being received by some perspective, depends on placing that perspective relative to our own" - Yes, there are limits to what we can say about the outside perspective as we can't reference it directly. We can only discuss it by analogy.

Comment by chris_leong on A Sketch of Answers for Physicalists · 2020-03-15T14:19:46.020Z · score: 2 (1 votes) · LW · GW

Maybe I'll write a post on this sometime.

"An account would have to be given of how we, as humans embedded in the universe, can speak as any kind of "external observer"" - If we construct a model that doesn't contain us, then we are an external observer of that model. We can then be analogy posit the existence of an agent that exists in that relation to us.

Re the analogy: We can't have an entity that is both internally referenceable and internally unreferenceable. However we can have an external reference to an unreferenceable entity. Okay, maybe the analogy wasn't quite as direct as I was thinking.

Comment by chris_leong on A Sketch of Answers for Physicalists · 2020-03-15T14:10:15.737Z · score: 2 (1 votes) · LW · GW

I'd define it as the claim that nothing non-material exists (except possibly logic).

Comment by chris_leong on A Sketch of Answers for Physicalists · 2020-03-14T20:14:47.580Z · score: 2 (1 votes) · LW · GW

I can follow your argument, but could you clarify what you mean by Sense and Reference?

Comment by chris_leong on A Sketch of Answers for Physicalists · 2020-03-14T20:11:19.657Z · score: 2 (1 votes) · LW · GW
Un-referenceable objective reality goes rather beyond un-knowable objective reality. The second doesn't collapse into absurdity, while the first does (note that "un-referenceable objective reality" is a reference!).

We can construct a model where we (the external observers) can reference things that no observer in the model (the internal observers) can reference. Here's an analogy - we can't prove an unprovable theorem, but we might be able to prove a theorem unprovable.

Incompatible with the sort of physicalism that thinks it isn't meaningful to talk about "seeing" (e.g. consciousness) independent of a physical definition.

I'm not familiar with that strain of thought, but I can posit why some people might find that compelling

"Define an interpretation scheme" is incredibly vague

Yeah, as I said, this is just a sketch. There's a lot more that would need to be said in order to actually do this.

Comment by chris_leong on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-14T01:05:17.220Z · score: 4 (2 votes) · LW · GW

Thanks, I thought this was useful, especially dividing it into three categories instead of two.

Comment by chris_leong on Analyticity Depends On Definitions · 2020-03-11T14:50:08.520Z · score: 2 (1 votes) · LW · GW

Interesting thought. I wouldn't go so far as to say it's only definable within a formal system. But without formal definitions, it's going to be kind of fuzzy/dependent on exact interpretation.

Comment by chris_leong on A conversation on theory of mind, subjectivity, and objectivity · 2020-03-11T00:34:57.280Z · score: 2 (1 votes) · LW · GW
Physics can't say what an epistemic component is

Insofar as the epistemic component consists of logic, physics can't say what that logic is ontologically. On the other hand, it can describe how brain states are linked to physical states, which should be sufficient to explain materialistic observations.

So as a justification for physics, it's circular

Circularity is inevitable (I like the arguments in Where Recursive Justification Hits Bottom), so this isn't as problematic as it seems.

That said, I agree that starting with subjective experience as our initial foundation is in one sense more empirical than starting with the external world, since we can derive the external world's existence from patterns in subjective experience.

Comment by chris_leong on A conversation on theory of mind, subjectivity, and objectivity · 2020-03-10T23:04:49.139Z · score: 2 (1 votes) · LW · GW
If someone said "actually, there's no such thing as (c), there's just (a) and (b)", then that's going to be hard to argue for, epistemically/normatively, since there is a denial of the existence of epistemology.

Physics can explain the epistemic component in your brain - it just can't explain the experience of believing or cognition in general.

I am not really importantly distinguishing qualia-observations from "the data my cognitive process is trying to explain" here. It seems like even an account that somehow doesn't believe in qualia still needs to have data that it explains, hence running into similar issues.

The data to be explained are the experiences - say, of seeing red or feeling pain. If you take that data to be the red brain process, that can be explained purely materialistically. The red brain process only needs a materialistic observer - i.e. some kind of central processing unit - so what's wrong with this? It's only qualia that need the observer to have a non-materialistic component.

Comment by chris_leong on A conversation on theory of mind, subjectivity, and objectivity · 2020-03-10T22:19:03.781Z · score: 6 (3 votes) · LW · GW

Happy to see someone else defending non-materialism since I see it as underrated. Some thoughts:

Nah, it can't account for what an "observation" is so can't really explain observations

This is really the heart of the issue. Is an observation qualia or some purely material process in our brain?

I should adopt the explanation that best explains my observation

Seems like a distraction? If the observations are materialist, then materialism can explain the materialist my-ness; if they are qualiatic, then we need qualia to define the qualiatic my-ness. Merely knowing qualia have a property of my-ness doesn't tell us which type. And it would seem unusual to, say, know that my-ness is qualiatic without first knowing observations are qualiatic, since we can't directly experience our my-ness, only our observations.

It has to do some ontological reshuffling around what "observations" are that, I think, undermines the case for believing in physics in the first place, which is that it explains my observations

Why does it undermine physics?

I think it makes more sense to think of mental things as existing subjectively (i.e. if they belong to you) and physical things as existing objectively. I definitely think that dualism is making a mistake in thinking of objectively-existing mental things

This relates quite closely to my post on Relabelings vs. External References. If mental things are just a relabelling of materialism, they don't actually add anything by being present in the model. In order to actually change the system, they need to refer to external entities, in which case mental things aren't really subjective any more.

Comment by chris_leong on How effective are tulpas? · 2020-03-10T15:24:27.471Z · score: 2 (1 votes) · LW · GW

Why'd you pick Kermit the frog?

Comment by chris_leong on Embedded vs. External Decision Problems · 2020-03-08T03:48:08.782Z · score: 2 (1 votes) · LW · GW

That's true also

Comment by chris_leong on Embedded vs. External Decision Problems · 2020-03-05T18:50:44.153Z · score: 2 (1 votes) · LW · GW

The agent has code. It can only do what the code says. If the code will make it one box, there was a sense in which it never could have two-boxed.
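
To make that concrete, here's a minimal sketch (the payoffs and function names are just illustrative, not from the original problem statement): a deterministic agent is nothing over and above its code, so once the code is fixed, so is the "choice" - a predictor with access to that code simply runs it.

```python
# Minimal sketch, with illustrative names and payoffs: a deterministic agent is just
# its code, so a predictor that can read the code fixes the outcome in advance.

def agent() -> str:
    """The agent's entire decision procedure."""
    return "one-box"

def predictor(agent_code) -> str:
    """A perfect predictor simply runs (or simulates) the agent's code."""
    return agent_code()

def newcomb_payoff(agent_code) -> int:
    prediction = predictor(agent_code)            # box B is filled iff one-boxing is predicted
    box_b = 1_000_000 if prediction == "one-box" else 0
    choice = agent_code()                         # the agent can only do what its code says
    return box_b if choice == "one-box" else box_b + 1_000

print(newcomb_payoff(agent))  # 1000000 - given this code, two-boxing was never really available
```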

Comment by chris_leong on Stuck Exploration · 2020-02-20T22:08:09.876Z · score: 2 (1 votes) · LW · GW

In the iterated version we want the coin to sometimes be heads, sometimes tails. Sorry, I'm confused - I have no idea why you want to transform the problem like that.

Comment by chris_leong on Stuck Exploration · 2020-02-20T08:52:49.119Z · score: 2 (1 votes) · LW · GW

Yeah, need to replace dying with losing a lot of utility. I've updated the post.

But the coin needs to depend on your prediction instead of always being biased a particular way.

Comment by chris_leong on A Memetic Mediator Manifesto · 2020-02-18T00:07:35.393Z · score: 2 (1 votes) · LW · GW

Seems like it is still a new idea

Comment by chris_leong on A Memetic Mediator Manifesto · 2020-02-17T21:59:46.791Z · score: 2 (1 votes) · LW · GW

I'm fine with that

Comment by chris_leong on Reference Post: Trivial Decision Problem · 2020-02-17T19:29:00.328Z · score: 2 (1 votes) · LW · GW

"I think the next place to go is to put this in the context of methods of choosing decision theories - the big ones being reflective modification and evolutionary/population level change. Pretty generally it seems like the trivial perspective is unstable is under these, but there are some circumstances where it's not." - sorry, I'm not following what you're saying here

Comment by chris_leong on Chris_Leong's Shortform · 2020-02-07T02:40:09.884Z · score: 5 (3 votes) · LW · GW

Thanks for clarifying

Comment by chris_leong on Chris_Leong's Shortform · 2020-02-06T12:45:22.431Z · score: 3 (2 votes) · LW · GW

"It's a contradiction to have a provable statement that is unprovable" - I meant it's a contradiction for a statement to be both provable and unprovable.

"It's not a contradiction for it to be provable that a statement is unprovable" - this isn't a contradiction

Comment by chris_leong on Chris_Leong's Shortform · 2020-02-06T03:27:31.552Z · score: 2 (1 votes) · LW · GW

Interesting point, but you're using duty differently than me. I'm talking about their duties towards you. Of course, we could have divided it another way or added extra levels.

Comment by chris_leong on Eukryt Wrts Blg · 2020-02-03T19:27:39.926Z · score: 3 (2 votes) · LW · GW

One problem is that completely avoiding jargon limits your ability to build up to more complex ideas

Comment by chris_leong on Chris_Leong's Shortform · 2020-02-02T21:56:11.673Z · score: 4 (2 votes) · LW · GW

Three levels of forgiveness - emotions, drives and obligations. The emotional level consists of your instinctual anger, rage, disappointment, betrayal, confusion or fear. This is about raw feelings. The drives consist of your "need" for them to say sorry, make amends, regret their actions, have a conversation or empathise with you. In other words, it's about needing the situation to turn out a particular way. The obligations are very similar to the drives, except that they are about their duty to perform these actions rather than your desire to make it happen.

Someone can forgive on all of these levels. Suppose someone says that they are sorry and the other person replies "there is nothing to forgive". Then perhaps they mean that there was no harm or that they have completely forgiven on all levels.

Alternatively, someone might forgive on one level, but not another. For example, it seems that most of the harm of holding onto a grudge comes from the emotional level and the drives level, and less from the obligations level.

Comment by chris_leong on Have epistemic conditions always been this bad? · 2020-01-25T14:35:52.360Z · score: 29 (16 votes) · LW · GW

I believe that the main reason why this hasn't been discussed in any depth on Less Wrong is a) the norm set up by Eliezer Yudkowsky in Politics is the Mindkiller and b) The Motte becoming the default rationalsphere-adjacent location for this kind of discussion. That said, it's plausible that the situation has reached the point where this topic can no longer be avoided.

Comment by chris_leong on Chris_Leong's Shortform · 2020-01-20T16:54:59.937Z · score: 7 (4 votes) · LW · GW

There appears to be something of a Sensemaking community developing on the internet, which could roughly be described as a spirituality-inspired attempt at epistemology. This includes Rebel Wisdom, Future Thinkers, Emerge and maybe you could even count post-rationality. While there are undoubtedly lots of critiques that could be made of their epistemics, I'd suggest watching this space as I think some interesting ideas will emerge out of it.

Comment by chris_leong on Reality-Revealing and Reality-Masking Puzzles · 2020-01-18T00:40:54.533Z · score: 5 (3 votes) · LW · GW

You are talking about it as though it is a property of the puzzle, when it seems likely to be an interaction between the person and the puzzle.

Comment by chris_leong on Reality-Revealing and Reality-Masking Puzzles · 2020-01-16T23:37:26.817Z · score: 6 (4 votes) · LW · GW

Interesting post, although I wish "reality-masking" puzzles had been defined better. Most of this post is about disorientation patterns or disabling parts of the epistemic immune system, more than anything directly masking reality.

Also related: Pseudo-rationality

Comment by chris_leong on Is backwards causation necessarily absurd? · 2020-01-15T00:55:47.974Z · score: 2 (1 votes) · LW · GW

"That said, I don't think we are really explaining or de-confusing anything if we appeal to backwards causation to understand Newcomb's Problem or argue for a particular solution to it." - How come?

Comment by chris_leong on Is backwards causation necessarily absurd? · 2020-01-15T00:53:34.950Z · score: 2 (1 votes) · LW · GW

"Relativity does not make the arrow of time relative to observer" - I didn't say that. I said there was no unified notion of the present

Comment by chris_leong on Why a New Rationalization Sequence? · 2020-01-14T17:46:32.548Z · score: 2 (1 votes) · LW · GW

Maybe you can't dream the actual process of factoring a large number, but you can dream of having just completed such a factoring with the result having come out correct.

Comment by chris_leong on Dissolving Confusion around Functional Decision Theory · 2020-01-06T20:46:53.977Z · score: 3 (2 votes) · LW · GW

I liked the diagrams as I think they'll be clarifying to most people. However, in response to:

I think that many proponents of FDT fail to make this point: FDT’s advantage is that it shifts the question to what type of agent you want to be--not misleading questions of what types of “choices” you want to make

FDT involves choosing an observation-action mapping, which is effectively the same as choosing an algorithm if the reward doesn't ever depend on why you make a particular decision and the mapping space is finite.

One problem with trying to model it as choosing what algorithm to run is that you are running code to select the algorithm, so if the actual algorithm ever matters, as opposed to just the observation-action map, you'd need to take the selection algorithm into account.
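
To illustrate what "choosing an observation-action mapping" over a finite mapping space could look like, here's a rough sketch using counterfactual mugging as the toy problem (the 50/50 coin and the $100/$10,000 payoffs are the usual illustrative numbers, not anything from the comment above): we enumerate every mapping from observations to actions and score each whole mapping, rather than scoring individual choices.

```python
# Rough sketch: policy selection as a search over a finite space of observation-action
# mappings, with counterfactual mugging as a toy problem. Coin probabilities and
# dollar amounts are illustrative assumptions.
from itertools import product

OBSERVATIONS = ("heads", "tails")
ACTIONS = ("pay", "refuse")

def expected_value(policy: dict) -> float:
    """Score an entire observation->action mapping, averaged over the coin."""
    # Tails branch: Omega asks the agent to hand over $100.
    tails_payoff = -100 if policy["tails"] == "pay" else 0
    # Heads branch: Omega pays $10,000 iff this policy would have paid on tails.
    heads_payoff = 10_000 if policy["tails"] == "pay" else 0
    return 0.5 * heads_payoff + 0.5 * tails_payoff

# The mapping space is finite (2 observations x 2 actions = 4 mappings),
# so we can enumerate it directly and pick the best whole mapping.
policies = [dict(zip(OBSERVATIONS, combo)) for combo in product(ACTIONS, repeat=len(OBSERVATIONS))]
best = max(policies, key=expected_value)
print(best, expected_value(best))  # a paying policy, expected value 4950.0
```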

Comment by chris_leong on The Universe Doesn't Have to Play Nice · 2020-01-06T20:33:53.958Z · score: 2 (1 votes) · LW · GW

The point about simulations was merely to show that the idea of a universe with the majority of consciousness being Boltzmann Brains isn't absurd

Comment by chris_leong on The Universe Doesn't Have to Play Nice · 2020-01-06T11:20:12.342Z · score: 5 (3 votes) · LW · GW

I'm sure I'll link back to this post soon. But this post is motivated by a few things such as:

a) Disagreements over consciousness - if non-materialist qualia existed, then we wouldn't be able to know about them empirically. But the universe doesn't have to play nice and make all phenomena accessible to our scientific instruments, so we should have more uncertainty about this than people generally possess.

b) The Theories That Can Explain Everything post - as nice as it'd be to just be able to evaluate theories empirically, there's no reason why we can't have a theory that is important for determining expectations but isn't cleanly falsifiable.

Comment by chris_leong on Normalization of Deviance · 2020-01-04T01:35:45.751Z · score: 4 (2 votes) · LW · GW

I thought it might be useful to give an example of when normalisation of deviance is functional. Let's suppose that a hospital has to treat patients, but because of short-staffing there would be no way of filling out all of the paperwork properly whilst treating all the patients, so the doctors don't fill out all of the fields.

It's also important to mention the possibility of scapegoating - perhaps the deviance is justified and practically everyone is working in that manner, but if something goes wrong you may be blamed anyway. So it's very important to take this small chance of an extremely harsh punishment into account.

Comment by chris_leong on Theories That Can Explain Everything · 2020-01-03T23:54:27.839Z · score: 2 (1 votes) · LW · GW

Interesting idea. What is the use of organising beliefs without updating them?

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2020-01-03T15:01:05.525Z · score: 2 (1 votes) · LW · GW
My argument is that we ARE thinking ahead about counterfactual mugging right now, in considering the question

When we think about counterfactual muggings, we naturally imagine the possibility of facing a counterfactual mugging in the future. I don't dispute the value of pre-committing, either to taking a specific action or to acting updatelessly. However, instead of imagining a future mugging, we could also imagine a present mugging where we didn't have time to make any pre-commitments. I don't think it is immediately obvious that we should think updatelessly; instead, I believe that this requires further justification.

The role of thinking about decision theory now is to help guide the actions of my future self

This is effectively an attempt at proof-by-definition

I think the average person is going to be thinking about things like duty, honor, and consistency which can serve some of the purpose of updatelessness. But sure, updateful reasoning is a natural kind of starting point, particularly coming from a background of modern economics or bayesian decision theory

If someone's default is already updateless reasoning, then there's no need for us to talk them into it. It's only people with an updateful default that we need to convince (until recently I had an updateful default).

And when we think about problems like counterfactual mugging, the description of the problem requires that there's both the possibility of heads and tails

It requires a counterfactual possibility, not an actual possibility. And a counterfactual possibility isn't actual, it's counter to the factual. So it's not clear this has any relevance.

It looks to me like you're tripping yourself up with verbal arguments that aren't at all obviously true. The reason I believe that the Counterfactual Prisoner's Dilemma is important is that it is a mathematical result that doesn't require much in the way of assumptions. Sure, it still has to be interpreted, but it seems hard to find an interpretation that avoids the conclusion that the updateful perspective doesn't quite succeed on its own terms.
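
For what it's worth, here's a small sketch of the payoff structure as I understand it (the $100/$10,000 figures are illustrative): Omega asks you to pay whichever way the coin actually lands, and rewards you iff you would also have paid had it landed the other way. Even conditioning on the actual flip, the always-pay policy beats never-pay, which is the sense in which refusing fails on its own terms.

```python
# Sketch of the Counterfactual Prisoner's Dilemma payoffs as I understand the setup
# (dollar amounts illustrative): you are asked to pay $100 regardless of the flip, and
# receive $10,000 iff you would also have paid had the coin landed the other way.

def payoff(policy: dict, actual_flip: str) -> int:
    other_flip = "tails" if actual_flip == "heads" else "heads"
    cost = -100 if policy[actual_flip] == "pay" else 0
    reward = 10_000 if policy[other_flip] == "pay" else 0
    return cost + reward

always_pay = {"heads": "pay", "tails": "pay"}
never_pay = {"heads": "refuse", "tails": "refuse"}

for flip in ("heads", "tails"):
    # No appeal to averaging over counterfactual worlds: conditional on the actual flip,
    # paying yields 9900 while refusing yields 0.
    print(flip, payoff(always_pay, flip), payoff(never_pay, flip))
```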

Comment by chris_leong on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-03T00:52:44.719Z · score: 15 (6 votes) · LW · GW

Have you considered cross-posting this to the EA forum?

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2020-01-02T12:52:53.293Z · score: 2 (1 votes) · LW · GW

You feel that I'm begging the question. I guess I take only thinking about this counterfactual as the default position, since that's where an average person is likely to be starting from. And I was trying to see if I could find an argument strong enough to displace this. So I'll freely admit I haven't provided a first-principles argument for focusing just on this counterfactual.

OK, but I don't see how that addresses my argument.

Your argument is that we need to look at iterated situations to understand learning. Sure, but that doesn't mean that we have to interpret every problem in iterated form. If we need to understand learning better, we can look at a few iterated problems beforehand, rather than turning this one into an iterated problem.

The average includes worlds that you know you are not in. So this doesn't help us justify taking these counterfactuals into account.

Let me explain more clearly why this is a circular argument:

a) You want to show that we should take counterfactuals into account when making decisions

b) You argue that this way of making decisions does better on average

c) The average includes the very counterfactuals whose value is in question. So b depends on a already being proven => circular argument

Comment by chris_leong on Theories That Can Explain Everything · 2020-01-02T12:24:59.428Z · score: 2 (1 votes) · LW · GW

Saying anything is possible is a prediction, but a trivial prediction. Nonetheless, it changes expectations if before only A seemed possible.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2020-01-02T00:24:16.567Z · score: 2 (1 votes) · LW · GW
The argument is that we have to understand learning in the first place to be able to make these arguments, and iterated situations are the easiest setting to do that in

Iterated situations are indeed useful for understanding learning. But I'm trying to abstract out over the learning insofar as I can. I care that you get the information required for the problem, but not so much how you get it.

Especially when we can see that the latter way of reasoning does better on average?

The average includes worlds that you know you are not in. So this doesn't help us justify taking these counterfactuals into account; indeed, for us to care about the average we need to already have an independent reason to care about these counterfactuals.

I kind of feel like you're just repeatedly denying this line of reasoning. Yes, the situation in front of you is that you're in the risk-hand world rather than the risk-life world. But this is just question-begging with respect to updateful reasoning.

I'm not saying you should reason in this way. You should reason updatelessly. But in order to get to the point of finding the Counterfactual Prisoner's Dilemma, which I consider a satisfactory justification, I had to rigorously question every other solution until I found one which could withstand the questioning. This seems like a better solution as it is less dependent on tricky-to-evaluate philosophical claims.

Ah, that's kind of the first reply from you that's surprised me in a bit

Well, thinking about a decision after you make it won't do you much good. So you're pretty much always thinking about decisions before you make them. But timelessness involves thinking about decisions before you end up facing them.

Comment by chris_leong on Counterfactual Mugging: Why should you pay? · 2020-01-01T14:30:43.719Z · score: 1 (2 votes) · LW · GW
If an agent is really in a pure one-shot case, that agent can do anything at all

You can learn about a situation in ways other than facing that exact situation yourself. For example, you may observe other agents facing that situation or receive testimony from an agent that has proven itself trustworthy. You don't even seem to disagree with me here, as you wrote: "you can learn enough about the universe to be confident you're now in a counterfactual mugging without ever having faced one before"

"This goes along with the idea that it's unreasonable to consider agents as if they emerge spontaneously from a vacuum, face a single decision problem, and then disappear" - I agree with this. I asked this question because I didn't have a good model of how to conceptualise decision theory problems, although I think I have a clearer idea now that we've got the Counterfactual Prisoner's Dilemma.

One way of appealing to human moral intuition

Doesn't work on counter-factually selfish agents

Decision theory should be reflectively endorsed decision theory. That's what decision theory basically is: thinking we do ahead of time which is supposed to help us make decisions

Thinking about decisions before you make them != thinking about decisions timelessly