The Short Case for Verificationism 2020-09-11T18:48:00.372Z · score: 4 (2 votes)
This Territory Does Not Exist 2020-08-13T00:30:25.700Z · score: 11 (8 votes)
ike's Shortform 2019-09-01T18:48:35.461Z · score: 5 (1 votes)
Attacking machine learning with adversarial examples 2017-02-17T00:28:09.908Z · score: 3 (4 votes)
Gates 2017 Annual letter 2017-02-15T02:39:12.352Z · score: 4 (5 votes)
Raymond Smullyan has died 2017-02-12T14:20:57.626Z · score: 3 (4 votes)
A Few Billionaires Are Turning Medical Philanthropy on Its Head 2016-12-04T15:08:22.933Z · score: 0 (1 votes)
Newcomb versus dust specks 2016-05-12T03:02:29.720Z · score: -1 (6 votes)
The guardian article on longevity research [link] 2015-01-11T19:02:52.830Z · score: 8 (9 votes)
Discussion of AI control over at worldbuilding.stackexchange [LINK] 2014-12-14T02:59:47.239Z · score: 6 (7 votes)
Rodney Brooks talks about Evil AI and mentions MIRI [LINK] 2014-11-12T04:50:23.828Z · score: 3 (6 votes)


Comment by ike on This Territory Does Not Exist · 2020-09-23T15:27:26.623Z · score: 1 (1 votes) · LW · GW

I don't believe I've done that. 

Comment by ike on This Territory Does Not Exist · 2020-09-22T15:56:03.270Z · score: 1 (1 votes) · LW · GW

Yes, in many ways, with extended arguments. What exactly is your issue?

Comment by ike on This Territory Does Not Exist · 2020-09-22T15:55:00.910Z · score: 1 (1 votes) · LW · GW

No, I never took 1 or 2 as a premise. Read it again.

Comment by ike on This Territory Does Not Exist · 2020-09-21T19:57:33.840Z · score: 1 (1 votes) · LW · GW

Why? Because your intuition doesn't tell you that an undecidable statement is meaningless unless it is ontological?

No, because the specific arguments only work for ontological statements. E.g. the multiverse argument only works for the subset of ontological claims that are true in only some worlds.

Comment by ike on The Short Case for Verificationism · 2020-09-21T19:55:47.943Z · score: 1 (1 votes) · LW · GW

Not following. Can you state your point plainly? Which part of my argument do you reject?

Comment by ike on This Territory Does Not Exist · 2020-09-18T19:33:18.177Z · score: 1 (1 votes) · LW · GW

> said that you made a claim based on nothing but intuition

This isn't true - I've made numerous arguments for this claim not purely based on intuition.

>The argument that if it has no observable consequences, it is meaningless does not apply to only ontological statements.

I did not make this argument. This is a conclusion that's argued for, not an argument, and the arguments for this conclusion only apply to ontological statements.

Comment by ike on The Short Case for Verificationism · 2020-09-18T15:00:44.003Z · score: 1 (1 votes) · LW · GW

If you take that as a premise, consider it contradictory to my conclusion, and also accept my premises, then the premises you accept imply a contradiction. That's your problem, not mine.

Comment by ike on This Territory Does Not Exist · 2020-09-18T14:58:58.142Z · score: 1 (1 votes) · LW · GW

You certainly started by making a direct appeal to your own intuition. Such an argument can be refuted by intuiting differently.

I've made a number of different arguments. You can respond by taking ontological terms as primitive, but as I've argued there's strong reasons for rejecting that.

> You don't have any systematic argument to that effect

Of course I do. Every one of the arguments I've put forward clearly applies only to the kinds of ontological statements I'm talking about. If an argument I believed was broader, then I'd believe a broader class of statements was meaningless. If you disagree, which specific argument of mine (not conclusion) doesn't?

I'm not interested in analytical definitions right now. That's how Quine argued against it and I don't care about that construction.

Comment by ike on This Territory Does Not Exist · 2020-09-18T02:02:08.286Z · score: 1 (1 votes) · LW · GW

>I didn't mention a specific self-defeater as that's been discussed in the comments above.

I've responded to each comment. Which argument do you think has not been sufficiently responded to?

>1. Denying the existence of a deeper, unobservable reality or saying that speaking about it is nonsense is also useless for any kind of prediction

You're the one saying we should treat this notion as primitive. I'm not arguing for taking verificationism as an axiom, but as a considered and argued-for conclusion. You obviously need far stronger arguments for your axioms/primitives than for your argued conclusions.

>2. The Universe Doesn't Have to Play Nice

It seems like there's a great deal of agreement in that post. You concede that there's no way to obtain evidence against Boltzmann or any knowledge about realism, agreeing with my 1 and 2 above (which are controversial in philosophy). I don't see what part of that post objects to the kind of reasoning here. I'm not saying the universe must play nice, I'm saying it's odd to assert a primitive under these conditions.

>Just because it is convenient to use exists in a way that refers to a particular scope of a multiverse, doesn't prevent us as treating the whole multiverse as just a rather unusual universe and using the term exists normally.

This would be consistent with my argument here, which is about claims that are true in some parts of the multiverse and false in others. If you retreat to the viewpoint that only statements about the multiverse can use the term "exist", then you should still agree that statements like "chairs exist in our world" are meaningless.

>aren't claims about a multiverse inconsistent with your strong verificationism? 

I think I live in a level IV multiverse, and the sense I mean this in is that my probability expectations are drawn from that multiverse conditioned on my current experience. It's entirely a statement about my expectations. (I also have a small probability mass on there being branches outside level IV with uncomputable universes, such as ones containing halting oracles.) I think this is meaningful but "the level IV multiverse actually exists" is not.

Comment by ike on This Territory Does Not Exist · 2020-09-18T00:32:29.725Z · score: 1 (1 votes) · LW · GW

I do have the disclaimer in the middle of OP, but not upfront, to be fair.

>the verification principle does feel like an ontological claim as it is claiming that certain things don't exist or at least that talking about them is meaningless

These are very different things.

>how are you defining ontological

Claims of the sort "X exists", or synonyms like "X is real", when intended in a deeper sense than colloquial usage. For example, "my love for you is real" is not asserting an ontological claim, it's just expressing an emotion, and "the stuff you see in the movies isn't real" is also not an ontological usage; but "the tree in the forest exists even when nobody's looking at it" is an ontological claim, as is "the past really happened".

(Note that I view "the tree in the forest exists when people are looking at it" as just as meaningless - all there is is the experience of viewing a tree. Our models contain trees, but the existence claim is about the territory, not the model.)

Comment by ike on The Short Case for Verificationism · 2020-09-17T22:04:29.653Z · score: 1 (1 votes) · LW · GW

That's not a counterargument, as it's fully consistent with the conclusion.

Comment by ike on This Territory Does Not Exist · 2020-09-17T22:01:10.041Z · score: 1 (1 votes) · LW · GW

My problem with ontological statements is they don't appear to be meaningful.

Don't confuse the historical verification principle with the reasons for believing it. Those reasons apply to ontological statements and not to other statements.

Comment by ike on The Short Case for Verificationism · 2020-09-17T18:52:41.512Z · score: 1 (1 votes) · LW · GW

The argument does not depend on probability.

If you disagree with the conclusion, please explain which premise is wrong, or explain how the conclusion can be false despite all premises holding.

Comment by ike on This Territory Does Not Exist · 2020-09-17T18:50:52.969Z · score: 1 (1 votes) · LW · GW

I only apply my principle to ontological statements, as explained in OP. And ontological statements never constrain expectations. So they are equivalent under these conditions.

Comment by ike on This Territory Does Not Exist · 2020-09-17T15:51:54.382Z · score: 1 (1 votes) · LW · GW

My version of the verification principle is explicitly about ontological claims. And that principle is not itself an ontological claim.

I don't really like the way the verification principle is phrased. I'm referencing it because I don't want to coin a new name for what is definitely a verificationist worldview with minor caveats.

Comment by ike on This Territory Does Not Exist · 2020-09-17T15:48:59.488Z · score: 1 (1 votes) · LW · GW

It's only self-defeating if you aren't careful about definitions. Which admittedly, I haven't been here. I'm writing a blog post exploring a topic, not an academic paper. I'd be glad to expand a bit more if you would point to a specific self-defeater.

>Here's another weird effect - let's suppose I roll a dice and see that it is a 6. I then erase the information from my brain, which then takes us to a position where the statement is impossible to verify. Does the statement then become meaningless?

Yes, it's irrelevant to any future predictions.

Re exist being primitive: it's a weird primitive that

1. Is entirely useless for any kind of prediction

2. Is entirely impossible to obtain evidence about even in theory

3. Appears to break down given plausible assumptions about the mere possibility of a multiverse

If anything, I think "exist is a primitive" is self-defeating, given 3. And it's useless for purposes people tend to use it for, given 1 and 2.

Do you think there's a "fact of the matter" as to which branch of the level III multiverse we're in? What about levels I, II, and IV?

Comment by ike on The Short Case for Verificationism · 2020-09-15T13:19:14.243Z · score: 1 (1 votes) · LW · GW

The argument is laid out in OP.

Comment by ike on The Short Case for Verificationism · 2020-09-14T21:41:30.078Z · score: 1 (1 votes) · LW · GW

To the extent Occam is interpreted as saying that more complicated theories are impossible, as opposed to unlikely, it's not plausible.

As above, my claim rests only on the possibility of a multiverse.

Comment by ike on The Short Case for Verificationism · 2020-09-12T21:09:10.548Z · score: 1 (1 votes) · LW · GW

I don't know what a plausible version of the argument you're hinting at would look like. If you think there's such a plausible argument, please point me at it.

Comment by ike on The Short Case for Verificationism · 2020-09-12T19:57:54.021Z · score: 1 (1 votes) · LW · GW

Anti-realism is quite contentious.

That doesn't mean that Occam grounding realism is at all plausible. I've laid out an argument for verificationism here and met my burden of proof. Suggesting that there might possibly be a counterargument isn't meeting the opposing burden of proof.

Comment by ike on The Short Case for Verificationism · 2020-09-12T19:33:59.898Z · score: 1 (1 votes) · LW · GW

>But you don't have a proof that that is the only legitimate use of Occam. If realists can use Occam to rescue realism, then realism gets rescued.

Surely the burden of proof is on someone suggesting that Occam somehow rescues realism.

Besides, level IV is arguably simpler than almost any alternative, including a singleton universe.

Comment by ike on The Short Case for Verificationism · 2020-09-12T18:18:26.347Z · score: 2 (2 votes) · LW · GW

I accept Occam, but for me it's just a way of setting priors in a model used to make predictions.

And part of my argument here is how the mere possibility of large universes destroys the coherency of realism. Even those rejecting simulations would still say it's possible.

(Putnam would say it's meaningless, and I would in fact agree but for different reasons.)

Comment by ike on The Short Case for Verificationism · 2020-09-12T18:08:02.482Z · score: 1 (1 votes) · LW · GW

I added two lemmas to clarify. I guess you could quibble with lemma 2. I think it does follow if we assume that we know, or at least can know, premise 3; that seems plausible if you're willing to accept it as a premise at all.

Comment by ike on The Short Case for Verificationism · 2020-09-12T18:01:05.340Z · score: 1 (1 votes) · LW · GW

I could break it up into more steps if it's not entirely clear that the premises imply the conclusion.

Comment by ike on The Short Case for Verificationism · 2020-09-12T15:48:43.658Z · score: 1 (1 votes) · LW · GW

My premises imply the conclusion. You might not like some of the premises, perhaps.

Comment by ike on The Short Case for Verificationism · 2020-09-12T15:29:34.629Z · score: 1 (1 votes) · LW · GW

> Firstly, its not clear whether a level IV multiverse means there are multiple copies of you with identical brain states but living in different environments -- why would you be in a "seeing a tree" brain state if you are on a planet with no trees?

It's very clear, to the contrary. A universe with an identical brain state but without any trees absolutely is part of it: there is a mathematical structure corresponding to the brain state that doesn't also contain trees.

> what then makes them identical

My standard criterion is subjective indistinguishability. Any universe that we can't tell we're not in contains a copy of us.

Re simulation: level IV contains a broad range of theories even weirder than simulations of us, infinitely many of which make ontological claims come out alternatively true or false. We certainly can't prove any ontological claim.

> Secondly, even if ontological questions can't be answered , that doesn't mean they are meaningless, under the one and only definition of "meaning" anyone ever had.

We'd at least have to significantly complicate the definition. I think people generally intend and expect ontological claims to have a coherent truth value, and if that's not the case then there's no referent for what people use ontological claims for. If you don't want to call it meaningless, fine, but it's certainly weird.

Comment by ike on The Short Case for Verificationism · 2020-09-12T15:19:55.352Z · score: 1 (1 votes) · LW · GW

No, it would be circular if I defined meaning to already include the verificationist claim.

Rather, I define meaning in other terms and then argue that this implies the verificationist claim.

Comment by ike on The Short Case for Verificationism · 2020-09-11T23:54:47.777Z · score: 1 (1 votes) · LW · GW

> To most I think, something is meaningful if it somehow is grounded in external reality

This is of course circular when trying to justify the meaningfulness of this "external reality" concept.

>we can't assess the truthiness of

This is one way to state verificationism, but I'm not *assuming* this, I'm arguing for it.

>Just for example, you perhaps can't speak to whether or not it is true that a person loves their mother, but that love is likely quite meaningful to them.

It might be meaningless to me, and meaningful for them.

>most of the interesting stuff we need to deal with is now outside the realm of facts and "meaning" by your model.

What interesting stuff do we need to deal with that doesn't affect our decisions or probability distributions over our experiences? My model doesn't affect either of those, by construction. (It does hint at some utility functions being somewhat incoherent, but I'm not necessarily standing by that; I prefer to let utility functions range broadly.)

Comment by ike on The Short Case for Verificationism · 2020-09-11T22:34:37.059Z · score: 1 (1 votes) · LW · GW

For this argument, yes, but it might not be sufficient when extending it beyond just uncertainty in verificationism.

Comment by ike on The Short Case for Verificationism · 2020-09-11T22:24:21.854Z · score: 1 (1 votes) · LW · GW

So your claim that it's not meaningless is basically just the negation of my third premise.

Comment by ike on The Short Case for Verificationism · 2020-09-11T22:10:14.875Z · score: 1 (1 votes) · LW · GW

I'm not sure what you mean by cosmological multiverse.

Re distinction between branching and diverging - I think even without adopting verificationism, one can plausibly argue that that distinction is meaningless.

Comment by ike on The Short Case for Verificationism · 2020-09-11T22:08:43.200Z · score: 2 (2 votes) · LW · GW

I think a claim is meaningful if it's possible to be true and possible to be false. Of course this puts a lot of work on "possible".

Comment by ike on The Short Case for Verificationism · 2020-09-11T21:05:18.890Z · score: 1 (1 votes) · LW · GW

At the very least, it should make verificationism more plausible to people who consider the level IV multiverse plausible.

I think the argument might go through in a weaker form with lower levels. But I suspect many people are already verificationist in that weaker form, to some extent. E.g. if you subscribe to "the electron is not in state A or state B until we measure it", then you're committed to a mild form of verificationism corresponding to the level III multiverse.

Compare to my argument - the ontological statements it applies to are just those statements that are both true and false on different parts of the multiverse containing us. I think this directly corresponds, in the case of the quantum multiverse, to what many people would consider things that lack a fact of the matter either way.

Comment by ike on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-08T17:19:38.673Z · score: 1 (1 votes) · LW · GW

>Does every single observation we make confirms the MWI?

>Does my simple existence confirm the existence of the multiverse?

These are equivalent to presumptuous philosopher, and my answer is the same - if my existence is more likely given MWI or multiverse, then it provides Bayesian evidence. This may or may not be enough to be confident in either, depending on the priors, which depends on the simplicity of the theory compared to the simplicity of competing theories.
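As a toy illustration of how that update works (a sketch with made-up numbers; `posterior` is just Bayes' rule, and the priors and likelihoods are illustrative, not drawn from any actual theory):

```python
from fractions import Fraction

def posterior(prior, likelihood_a, likelihood_b):
    """Posterior probability of theory A after observing evidence that
    theory A assigns probability likelihood_a and theory B assigns likelihood_b."""
    joint_a = prior * likelihood_a
    joint_b = (1 - prior) * likelihood_b
    return joint_a / (joint_a + joint_b)

# Theory A (multiverse) makes my observations one-in-a-billion likely;
# theory B (single universe) makes them one-in-a-trillion likely.
# Whether the 1000x Bayes factor is decisive depends entirely on the prior.
p = posterior(Fraction(1, 2), Fraction(1, 10**9), Fraction(1, 10**12))
print(float(p))  # ~0.999: near-certainty from an even prior

p = posterior(Fraction(1, 10**6), Fraction(1, 10**9), Fraction(1, 10**12))
print(float(p))  # ~0.001: the same evidence barely moves a strongly skeptical prior
```

The same likelihood ratio can leave you confident or unconvinced, which is the point about priors and theory simplicity above.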

>Wouldn't my existence alone confirm there are numerous ancestor simulations?

No, there's a lot of implausible assumptions involved there. I don't think the measure of ancestor simulations across the multiverse is significant. If we had strong evidence that the measure was high, then that probability would go up, all else being equal.

The great filter itself relies on assumptions about base rates of life arising and thriving which are very uncertain. The post you link to says:

>Assume the various places we think the filter could be are equally likely.

Also, we should be conditioning on all the facts we know, which includes the nature of our Earth, the fact we don't see any signs of life on other planets, etc. It's not at all clear that a future filter is more likely once all that is taken into account.

>In Dr. Evil and Dub, what conclusion would SIA make?

The conclusion would be that they're equally likely to be the clone vs the original. Whether they should act on this conclusion depends on blackmail game theory.

> Is the brain arm race correct?

I don't know which paradox you're referring to.

I don't think any of these paradoxes are resolved by ad hoc thinking. One needs to carefully consider the actual evidence, the Bayes factor of conditioning on one's existence, the prior probabilities, and use SIA to put it all together. The fact that this sometimes results in unintuitive results shouldn't be held against the theory, all anthropic theories will be unintuitive somewhere.

Comment by ike on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-08T16:46:44.827Z · score: 1 (1 votes) · LW · GW

>However, what you are saying is essentially the MWI.

No it isn't - MWI has to do with quantum effects, and that scenario doesn't involve any. You can't argue against MWI on the basis of contingent quantum facts (which you effectively did when pointing to a "major ongoing debate" about MWI probability - that debate is contingent on quantum idiosyncrasies), and then say those arguments apply to any multiverse.

>It is saying the choice of reference class, under some conditions, would not change the numerical value of probability. Because the effect of the choice cancels out in problems such as sleeping beauty and doomsday argument. Not that there is no reference class to begin with.

If the choice of reference class doesn't matter, then you don't need a reference class. You can formulate it to not require a reference class, as I did. Certainly there's no problem of arbitrary picking of reference classes if it literally doesn't make a difference to the answer.

The only restriction is that it must contain all subjectively indistinguishable observers - which is equivalent to saying it's possible for "me" to be any observer that I don't "know" that I'm not - which is almost tautological (don't assume something that you don't know). It isn't arbitrary to accept a tautological restriction here.

I'll respond to the paradoxes separately.

Comment by ike on Reference Classes · 2020-09-07T23:27:36.920Z · score: 1 (1 votes) · LW · GW

I think that the only rational reason to treat your own experience as more significant, given the constraints of the problem, is the anthropic nature of the evidence. And that explains nicely why others shouldn't update as much.

If you think there's an example that isolates non-anthropic effects that behave similarly, perhaps. I'll reserve judgement until I see such an example - for now, I don't know of any.

Comment by ike on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-07T22:26:08.001Z · score: 1 (1 votes) · LW · GW

> As for the difference between Questions 2 and 3, the fundamental difference is the probability cannot be formulated from a single perspective for 3.

I still see no relevant difference between 2 and 3. For one, you're assuming a random choice can be made, but aren't explaining how. Maybe that random choice results in two universes, one of which has the original get assigned as RED and the other has the clone assigned as RED.

I don't think that probabilities are impossible, just because there are multiple copies of me. I don't think you've addressed the issue at all. You're pointing at supposed paradoxes without naming a single one that rules out such probabilities.

> And keep in mind, the nature of probability in MWI is a major ongoing debate.

None of the scenarios we've been discussing involve MWI - that's more complicated because the equations are tricky and don't easily lend themselves to simple conceptions of multiple worlds.

> The fact that the probability comes from a complete known experiment with a deterministic outcome is not easily justifiable.

Bayesianism does not require determinism to generate probabilities. Technically, it's just a way of turning observations about the past into predictions about the future.

> I do not think all anthropic paradoxes are settled by SIA.

Can you name one that isn't?

> And I am pretty sure there will be supporters of SIA who's unhappy with your definition of the reference class (or lack thereof).

I don't know why I should care, even if that were the case. I have a firm grounding of Bayesianism that leads directly to SIA. Reference classes aren't needed:

> Notice that unlike SSA, SIA is not dependent on the choice of reference class, as long as the reference class is large enough to contain all subjectively indistinguishable observers.

I think your assertion that SIA requires reference classes just means you aren't sufficiently familiar with it. As far as I can tell, your only argument against self locating probability had to do with the issue of reference classes.

Comment by ike on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-07T20:16:21.368Z · score: 1 (1 votes) · LW · GW

Expectations are subjective and Bayesian.

>The answer (e.g. "original" vs "clone") could be on the back of a piece of paper laying right in front of me.

I don't understand why you think question 2 is meaningful, but question 3 is not, in that case. If it's meaningful to ask what Omega labelled you, why isn't it meaningful to ask what Omega wrote on the paper in front of you?

>there is no rational way to assign probabilities to propositions regarding it

Bayesianism is perfectly capable of assigning probabilities here. You haven't actually argued for this claim, you're just asserting it.

>However, one counter-argument is to question the validity of this claim. Why there has to be a probability at all? Why can I just say "I don't know"?

You can, of course, do this for any question. You can refuse to make any predictions at all. What's unclear is why you're ok with predictions in general but not when there exist multiple copies of you.

>If we say there should be a probability to it, then it comes to the question of how. SSA or SIA, what counts as an observer, which reference class to use in which case etc. There's one judgment call after another. Paradoxes ensue.

I don't see any paradoxes. SIA is the natural setup, and observer is defined as any being that's subjectively indistinguishable from me. We don't need reference classes. I *know* that I'm subjectively indistinguishable from myself, so there's no need to consider any beings that don't know that. There are no judgement calls required.

Comment by ike on Reference Classes · 2020-09-07T14:24:19.924Z · score: 2 (2 votes) · LW · GW

I wrote the following comment over there which seemed to be caught in the spam filter or something:

Both Sue and the third party are rational, and them knowing all objective facts about everyone's experiences does not eliminate the disagreement.

The reason is that the evidence is anthropic in nature - it is more likely under certain hypotheses that affect the probability of "you" existing or "you" having certain experiences, above and beyond objective facts. Such evidence is agent-centered.

For example, Sue's evidence raises the probability of the hypothesis "God exists and cares about me in particular" for her, but not for the third party. Of course, the third party's probability of "God cares about Sue in particular" goes up. But that has a lower prior probability when it's about someone else, because that hypothesis also predicts that "I" will "be Sue" more than the baseline expectation of 1 in 7 billion or so.

In general, the class of hypotheses that Sue's evidence favors also tends to make Sue's existence and sentience more likely. Since Sue knows that she exists and is sentient but does not know that about anyone else, she starts with a higher prior probability in that class of hypotheses, and therefore the same update as third parties will result in a higher posterior probability for that class.

This also means that Sue's family and barista rationally conclude somewhat weaker versions of the class of hypotheses - their priors should be higher than a random third party, but lower than Sue's priors.

This is analogous to the argument with lottery winners here:

> But what is your watching friend supposed to think? Though his predicament is perfectly predictable to you - that is, you expected before starting the experiment to see his confusion - from his perspective it is just a pure 100% unexplained miracle. What you have reason to believe and what he has reason to believe would now seem separated by an uncrossable gap, which no amount of explanation can bridge. This is the main plausible exception I know to Aumann's Agreement Theorem.

> Pity those poor folk who actually win the lottery! If the hypothesis "this world is a holodeck" is normatively assigned a calibrated confidence well above 10^-8, the lottery winner now has incommunicable good reason to believe they are in a holodeck. (I.e. to believe that the universe is such that most conscious observers observe ridiculously improbable positive events.)

Your example with Sue is the same, just at a smaller scale with less evidence and therefore less strong conclusions.

Re your ethics example, you're assuming that knowledge of others' intuitions counts as moral evidence. Even if that were the case, knowledge of a single person's intuition is plausibly not enough to shift from uncertain to confident in one position, or vice versa.

Comment by ike on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-05T03:30:01.256Z · score: 1 (1 votes) · LW · GW

Re your "takedown" of anthropic arguments:

For Presumptuous Philosopher: you reject the "prior probability that “I” actually exist." But this is a perfectly valid Bayesian update. You update on your entire lifetime of observations. If one theory says those observations are one in a billion likely, and another theory says they're one in a trillion, then you update in favor of the former theory.

I do think the doomsday argument fails, because your observations aren't any less likely in a doomsday scenario vs not. This is related to the fact that I accept SIA and not SSA.

>It should be noted this counter-argument states the probability of “me” being simulated is a false concept

I actually agree with this, if the assumption is that the simulation is perfect and one can't tell in principle if they're in a simulation. Then, I think the question is meaningless. But for purposes of predicting anomalies that one would expect in a simulation and not otherwise, it's a valid concept.

Dr Evil is complicated because of blackmail concerns.

Comment by ike on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-05T03:02:06.165Z · score: 1 (1 votes) · LW · GW

>From the subject’s first-person perspective there is no new information at the time of wake-up. Some arguments suggest there is new information that the subject learned “I” exist. However, from the first-person perspective, recognizing the perspective center is the starting point of further reasoning, it is a logical truth that “I” exist. Or simply, “It is guaranteed to find myself exist.” Therefore “my” own existence cannot be treated as new evidence.

I agree one can't update on existing. But you aren't updating on wake-up - you can make this prediction at any point. Before you go to sleep, you predict "when I wake up, it will be in a world where the coin landed heads 1/3 of the time". No updating required.
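That prediction can be checked with a quick simulation of the standard Sleeping Beauty setup (a sketch, assuming the usual protocol: one awakening on heads, two on tails):

```python
import random

def simulate(trials=100_000, seed=0):
    """Fraction of awakenings that occur in heads-worlds."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        wakeups = 1 if heads else 2  # heads: wake once; tails: wake twice
        total_awakenings += wakeups
        if heads:
            heads_awakenings += wakeups
    return heads_awakenings / total_awakenings

print(simulate())  # ≈ 1/3 of awakenings happen in heads-worlds
```

The before-sleep prediction "a third of my wake-ups will be in heads-worlds" is borne out without any mid-experiment update.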

Comment by ike on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-05T02:45:41.830Z · score: 1 (1 votes) · LW · GW

I don't agree with your argument against self-locating-uncertainty.

I define such questions in terms of expectations. So your question 2 is meaningful because it represents your expectations as to what you'll see on the label. Question 3 is likewise meaningful, because it represents expectations for anything that causally depends on who the original was - e.g. you ask Omega, "Was I the original?"

If you suppose there's literally no record anywhere, and therefore no difference in expectation, then I might agree that it's meaningless. But anthropic questions are typically not like that.

Comment by ike on This Territory Does Not Exist · 2020-09-02T13:15:32.867Z · score: 1 (1 votes) · LW · GW

I haven't written specifically about goals, but given that claims about future experiences are coherent, preferences over the distribution of such claims are also coherent, and one can act on one's beliefs about how one's actions affect said distribution. This doesn't require the past to exist.

Comment by ike on This Territory Does Not Exist · 2020-09-02T04:04:36.190Z · score: 1 (1 votes) · LW · GW

I'm not sure. I think even if the strong claim here is wrong and realism is coherent, it's still fundamentally unknowable, and we can't get any evidence at all in favor. That might be enough to doom altruism.

It's hard for me to reason well about a concept I believe to be incoherent, though.

Comment by ike on This Territory Does Not Exist · 2020-09-01T22:53:17.937Z · score: 1 (1 votes) · LW · GW

Altruism is a preference. On my view, that preference is just incoherent, because it refers to entities that are meaningless. But even without that, there's no great argument for why anyone should be altruistic, or for any moral claims.

I don't think it's possible in principle to configure a mind to pursue incoherent goals. If it was accepted to be coherent, then it would be possible.

Comment by ike on What am I missing? (quantum physics) · 2020-08-21T15:50:29.106Z · score: 17 (9 votes) · LW · GW

Because it's possible to do things that would be impossible under a local hidden variable theory such as you're describing. See Bell's theorem, or the CHSH game, in which a quantum strategy can beat any classical strategy.
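For concreteness, here's a sketch of the gap for the CHSH game (the standard game where the players receive bits x and y and win when their outputs satisfy a XOR b = x AND y; the quantum value below is the known optimum, stated rather than derived):

```python
import math
from itertools import product

def classical_best():
    """Brute-force all deterministic strategies: each player is a function
    from their input bit to an output bit. Shared randomness can't beat
    the best deterministic strategy, so this search suffices."""
    best = 0.0
    strategies = list(product([0, 1], repeat=2))  # (f(0), f(1))
    for alice, bob in product(strategies, strategies):
        wins = sum((alice[x] ^ bob[y]) == (x & y)
                   for x, y in product([0, 1], repeat=2))
        best = max(best, wins / 4)
    return best

def quantum_win_probability():
    """Winning probability of the optimal entangled strategy: cos^2(pi/8)."""
    return math.cos(math.pi / 8) ** 2

print(classical_best())           # 0.75: no classical strategy does better
print(quantum_win_probability())  # ~0.854: achievable with a shared entangled pair
```

No assignment of pre-existing local answers reaches the quantum winning probability, which is the operational content of Bell's theorem.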

Comment by ike on This Territory Does Not Exist · 2020-08-17T19:12:03.314Z · score: 1 (1 votes) · LW · GW

Those words are interchangeable. Not sure what your point is.

Comment by ike on This Territory Does Not Exist · 2020-08-17T17:53:54.672Z · score: 1 (1 votes) · LW · GW

Which definition have you put forward? My complaint is that the definitions are circular.

>The counter argument is that since realism is valuable, at least to some, verificationism is too limited as a theory of meaning .

I would deny that one can meaningfully have preferences over incoherent claims, and note that one can't validly reason that something is coherent based on the fact that one can have a preference over it, as that would be question-begging.

That said, if you have a good argument for why realism can be valuable, it might be relevant. But all you actually have is an assertion that some find it valuable.

Meanwhile, you've asserted both that communication implies meaning, and that parties to a communication can be mistaken about what something means. I don't see how the two are consistent.

Comment by ike on This Territory Does Not Exist · 2020-08-17T17:49:12.797Z · score: 1 (1 votes) · LW · GW

No, I've objected that the alternative definitions are circular, and assume the coherency as part of the definition. That is a valid critique even without assuming that it's incoherent from the outset.