The Short Case for Verificationism

post by ike · 2020-09-11T18:48:00.372Z · LW · GW · 57 comments

Follow-up to: https://www.lesswrong.com/posts/PSichw8wqmbood6fj/this-territory-does-not-exist [LW · GW]

Here's a simple and direct argument for my version of verificationism.

Note that the argument uses ontological terms that are meaningless on my views. It functions as a reductio - either one must accept the conclusion, or accept that some of the premises are meaningless, which amounts to the same thing.

Premise 1: The level IV multiverse is possible.

Premise 2: If the level IV multiverse is possible, then we cannot know that we are not in it.

Lemma 1: We cannot know that we are not in the level IV multiverse.

Premise 3: If we are in the level IV multiverse, then ontological claims about our world are meaningless, because we simultaneously exist in worlds where they are true and worlds where they are not true.

Lemma 2: If we can know that ontological claims are meaningful, then we can know we're not in the level IV multiverse.

Conclusion: We cannot know that ontological claims about our world are meaningful.

Edited to add two lemmas. Premises and conclusion unchanged.

57 comments

Comments sorted by top scores.

comment by Gordon Seidoh Worley (gworley) · 2020-09-11T21:55:45.208Z · LW(p) · GW(p)

It might help if you could be more specific about what it means for a statement to be "meaningless". Simply that we're unable to treat it as a fact?

Replies from: ike
comment by ike · 2020-09-11T22:08:43.200Z · LW(p) · GW(p)

I think a claim is meaningful if it's possible to be true and possible to be false. Of course this puts a lot of work on "possible".

Replies from: TAG, gworley, benjy-forstadt-1
comment by TAG · 2020-09-12T15:09:19.209Z · LW(p) · GW(p)

> I think a claim is meaningful if it’s possible to be true and possible to be false. Of course this puts a lot of work on “possible”.

That's not the standard verificationist claim, which is more that things are meaningful if they can be verified as true or false.

Replies from: ike
comment by ike · 2020-09-12T15:19:55.352Z · LW(p) · GW(p)

No, it would be circular if I defined meaning to already include the verificationist claim.

Rather, I define meaning in other terms and then argue that this implies the verificationist claim.

Replies from: TAG
comment by TAG · 2020-09-12T15:29:21.829Z · LW(p) · GW(p)

There are gaps in the argument, then.

Replies from: ike
comment by ike · 2020-09-12T15:48:43.658Z · LW(p) · GW(p)

My premises imply the conclusion. Perhaps you don't like some of the premises.

Replies from: TAG
comment by TAG · 2020-09-12T17:52:16.667Z · LW(p) · GW(p)

I don't think they do. But that should not be in dispute. The point of a logical argument is to achieve complete clarity about the premises and the way they imply the conclusion.

Replies from: ike, ike
comment by ike · 2020-09-12T18:08:02.482Z · LW(p) · GW(p)

I added two lemmas to clarify. I guess you could quibble with Lemma 2; I think it does follow if we assume that we know, or at least can know, Premise 3, and that seems plausible if you're willing to accept it as a premise at all.

comment by ike · 2020-09-12T18:01:05.340Z · LW(p) · GW(p)

I could break it up into more steps if it's not entirely clear that the premises imply the conclusion.

comment by Gordon Seidoh Worley (gworley) · 2020-09-11T23:47:27.323Z · LW(p) · GW(p)

So, I think the crux of why I don't really agree with your general gist and why I'm guessing a lot of people don't, is that we see meaningfulness as something bigger than just whether or not something is a fact (a statement that has a coherent truth value). To most I think, something is meaningful if it somehow is grounded in external reality, not whether or not it can be assessed to be true, and many things are meaningful to people that we can't assess the truthiness of. You seem to already agree that there are many things about which we cannot speak of facts, yet these non-facts are not meaningless to people. Just for example, you perhaps can't speak to whether or not it is true that a person loves their mother, but that love is likely quite meaningful to them. There's a kind of deflation of what you mean by "meaning" here that, to me, makes this position kind of boring and useless, since most of the interesting stuff we need to deal with is now outside the realm of facts and "meaning" by your model.

Replies from: ike
comment by ike · 2020-09-11T23:54:47.777Z · LW(p) · GW(p)

> To most I think, something is meaningful if it somehow is grounded in external reality

This is of course circular when trying to justify the meaningfulness of this "external reality" concept.

>we can't assess the truthiness of

This is one way to state verificationism, but I'm not *assuming* this, I'm arguing for it.

>Just for example, you perhaps can't speak to whether or not it is true that a person loves their mother, but that love is likely quite meaningful to them.

It might be meaningless to me, and meaningful for them.

>most of the interesting stuff we need to deal with is now outside the realm of facts and "meaning" by your model.

What interesting stuff do we need to deal with that doesn't affect our decisions or probability distributions over our experiences? My model doesn't affect either of those, by construction. (It does hint at some utility functions being somewhat incoherent, but I'm not necessarily standing by that; I prefer to let utility functions range broadly.)

comment by Benjy Forstadt (benjy-forstadt-1) · 2020-09-11T22:23:57.618Z · LW(p) · GW(p)

I think for your purposes you can just define meaningless as “neither true nor false” without detouring into possibility.

Replies from: ike
comment by ike · 2020-09-11T22:34:37.059Z · LW(p) · GW(p)

For this argument, yes, but it might not be sufficient when extending it beyond just uncertainty in verificationism.

comment by Benjy Forstadt (benjy-forstadt-1) · 2020-09-11T20:36:26.020Z · LW(p) · GW(p)

This looks like an argument, not for verificationism, but for the impossibility of knowing that verificationism is false. This seems unproblematic to me.

I am also skeptical of premise 3. It relies on a certain conception of personal identity in Level IV - that in some sense we are all our copies in the multiverse, so they count as a single observer.

Replies from: ike
comment by ike · 2020-09-11T21:05:18.890Z · LW(p) · GW(p)

At the very least, it should make verificationism more plausible to people who consider the level IV multiverse plausible.

I think the argument might go through in a weaker form with lower levels. But I suspect many people are already verificationist in that weaker form, to some extent. E.g. if you subscribe to "the electron is not in state A or state B until we measure it", then you're committed to a mild form of verificationism corresponding to the level III multiverse.

Compare to my argument - the ontological statements it applies to are just those statements that are both true and false in different parts of the multiverse containing us. I think this directly corresponds, in the case of the quantum multiverse, to what many people would consider things that lack a fact of the matter either way.

Replies from: benjy-forstadt-1
comment by Benjy Forstadt (benjy-forstadt-1) · 2020-09-11T21:47:48.239Z · LW(p) · GW(p)

I think I agree it makes verificationism a bit more plausible if you already find Tegmark IV plausible.

Regarding the quantum multiverse - yes, I agree, that is the usual way of thinking about things, moreover the usual thinking is that most ordinary statements about the future are similarly indeterminate. On the other hand, this isn’t the usual thinking about the cosmological multiverse. In a quantum multiverse, universes literally branch, in a cosmological multiverse, universes merely diverge. So, assuming the usual views about these multiverses are correct, would Level IV be like the quantum multiverse or would it be like the cosmological multiverse?

I can see both ways, but on reflection, it seems more natural to say it's like the quantum multiverse. On the cosmological picture, for every consistent mathematical structure, you've got a separate self-contained world, and there can be a lot of duplicate structure across worlds, but there can't be duplicate worlds. It seems to make a weird distinction between worlds and the substructures in worlds. On the quantum picture, you can think of the observer as a mathematical structure in its own right, that is instantiated in many different larger structures, and there's no fundamental notion of "world".

So fine, this gives you something like verificationism.

Replies from: ike
comment by ike · 2020-09-11T22:10:14.875Z · LW(p) · GW(p)

I'm not sure what you mean by cosmological multiverse.

Re distinction between branching and diverging - I think even without adopting verificationism, one can plausibly argue that that distinction is meaningless.

Replies from: benjy-forstadt-1
comment by Benjy Forstadt (benjy-forstadt-1) · 2020-09-11T22:21:13.890Z · LW(p) · GW(p)

By cosmological multiverse, I mean Level I or II. It is arguable that the distinction between branching and diverging is meaningless, or that Level I and II should be viewed as branching, but that is not the usual view.

I think it’s clear it’s not meaningless, and that those who think it’s meaningless just favor viewing every kind of splitting as branching. Let me explain: To say the future branches, what I mean is that there is no fact of the matter what exactly will happen in the future. To say the future diverges, what I mean, is that there is a fact of the matter about what will happen in the future, but that there are observers just like me who will observe a different future.

Either there is a fact of the matter what will happen in the future, or there isn’t (?!). It may indeed be the case that the concept of diverging is incoherent, in which case the only kind of splitting is branching. This is a heterodox view, however.

Replies from: ike
comment by ike · 2020-09-11T22:24:21.854Z · LW(p) · GW(p)

So your claim that it's not meaningless is basically just the negation of my third premise.

Replies from: benjy-forstadt-1
comment by TAG · 2020-09-12T15:04:51.502Z · LW(p) · GW(p)

> If we are in the level IV multiverse, then ontological claims about our world are meaningless, because we simultaneously exist in worlds where they are true and worlds where they are not true

There are a number of different claims jammed together, there.

Firstly, it's not clear whether a level IV multiverse means there are multiple copies of you with identical brain states but living in different environments -- why would you be in a "seeing a tree" brain state if you are on a planet with no trees? It's also not clear whether you can have qualitatively identical counterparts that don't share brain states -- what then makes them identical? It's also not clear that qualitative identity is sufficient for numerical identity. Basically, multiversal theories don't supply a theory of personal identity -- that has to be an additional assumption.

(Oddly enough, there is a better way of getting to the same point. If we can't prove that we are not living in a simulation, then we can't resolve basic ontological questions, even if we have excellent predictive theories).

Secondly, even if ontological questions can't be answered, that doesn't mean they are meaningless, under the one and only definition of "meaning" anyone ever had.

> So, I think the crux of why I don't really agree with your general gist and why I'm guessing a lot of people don't, is that we see meaningfulness as something bigger than just whether or not something is a fact (a statement that has a coherent truth value).

Indeed!

Replies from: ike
comment by ike · 2020-09-12T15:29:34.629Z · LW(p) · GW(p)

> Firstly, it's not clear whether a level IV multiverse means there are multiple copies of you with identical brain states but living in different environments -- why would you be in a "seeing a tree" brain state if you are on a planet with no trees?

It's very clear, to the contrary. A universe with an identical brain state but without any trees absolutely is part of it; there is a mathematical structure corresponding to the brain state that doesn't also contain trees.

> what then makes them identical

My standard criterion is subjective indistinguishability: any universe that we can't tell we're not in contains a copy of us.

Re simulation: level IV contains a broad range of theories even weirder than simulations of us, infinitely many of which make ontological claims come out alternatively true or false. We certainly can't prove any ontological claim.

> Secondly, even if ontological questions can't be answered, that doesn't mean they are meaningless, under the one and only definition of "meaning" anyone ever had.

We'd at least have to significantly complicate the definition. I think people generally intend and expect ontological claims to have a coherent truth value, and if that's not the case then there's no referent for what people use ontological claims for. If you don't want to call it meaningless, fine, but it's certainly weird.

Replies from: TAG
comment by TAG · 2020-09-12T18:13:59.175Z · LW(p) · GW(p)

You are making two claims: one about whether ontological indeterminacy holds, and one about the meaning of "meaning".

Setting aside the second claim, the first claim rests on an assumption that the only way to judge a theory is direct empiricism. But realists tend to have other theoretical desiderata in mind; a lot would reject simulations and large universes on the basis of Occam's Razor, for instance.

As for the rest: you might have a valid argument that it's inconsistent to believe in both empirical realism and large universes.

Replies from: ike
comment by ike · 2020-09-12T18:18:26.347Z · LW(p) · GW(p)

I accept Occam, but for me it's just a way of setting priors in a model used to make predictions.
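
One common way to cash out "Occam as a way of setting priors" (a standard Solomonoff-style formalization, not necessarily the exact one intended here; K(h) denotes the description length of hypothesis h) is to weight hypotheses by simplicity and predict with the posterior mixture:

```latex
P(h) \propto 2^{-K(h)}, \qquad
P(x \mid \text{data}) = \sum_h P(h \mid \text{data})\, P(x \mid h, \text{data})
```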

And part of my argument here is how the mere possibility of large universes destroys the coherency of realism. Even those rejecting simulations would still say it's possible.

(Putnam would say it's meaningless, and I would in fact agree but for different reasons.)

Replies from: TAG
comment by TAG · 2020-09-12T19:21:13.964Z · LW(p) · GW(p)

> I accept Occam, but for me it's just a way of setting priors in a model used to make predictions.

But you don't have a proof that that is the only legitimate use of Occam. If realists can use Occam to rescue realism, then realism gets rescued.

> And part of my argument here is how the mere possibility of large universes destroys the coherency of realism. Even those rejecting simulations would still say it’s possible.

That would be the sort of problem that probablistic reasoning addresses.

Replies from: ike
comment by ike · 2020-09-12T19:33:59.898Z · LW(p) · GW(p)

>But you don't have a proof that that is the only legitimate use of Occam. If realists can use Occam to rescue realism, then realism gets rescued.

Surely the burden of proof is on someone suggesting that Occam somehow rescues realism.

Besides, level IV is arguably simpler than almost any alternative, including a singleton universe.

Replies from: TAG
comment by TAG · 2020-09-12T19:42:08.182Z · LW(p) · GW(p)

> Surely the burden of proof is on someone suggesting that Occam somehow rescues realism.

That's not sure at all. Anti-realism is quite contentious.

> Besides, level IV is arguably simpler than almost any alternative, including a singleton universe.

It can come out as very simple or very complex depending on how you construe Occam.

Replies from: ike
comment by ike · 2020-09-12T19:57:54.021Z · LW(p) · GW(p)

> Anti-realism is quite contentious.

That doesn't mean that Occam grounding realism is at all plausible. I've laid out an argument for verificationism here and met my burden of proof. Suggesting that there might possibly be a counterargument isn't meeting the opposing burden of proof.

Replies from: TAG
comment by TAG · 2020-09-12T20:42:25.369Z · LW(p) · GW(p)

Meeting contrary arguments is part of making an argument. There definitely is such a counterargument, even if you have never heard of it. That's what steel manning and strong manning are all about.

Replies from: ike
comment by ike · 2020-09-12T21:09:10.548Z · LW(p) · GW(p)

I don't know what a plausible version of the argument you're hinting at would look like. If you think there's such a plausible argument, please point me at it.

Replies from: TAG
comment by TAG · 2020-09-14T18:25:52.899Z · LW(p) · GW(p)

I don't know what your thoughts on plausibility are. But multiversal theories are straightforwardly excluded by the original version of Occam's Razor, the one about not multiplying entities.

Replies from: ike
comment by ike · 2020-09-14T21:41:30.078Z · LW(p) · GW(p)

To the extent Occam is interpreted as saying that more complicated theories are impossible, as opposed to unlikely, it's not plausible.

As above, my claim rests only on the possibility of a multiverse.

Replies from: TAG
comment by TAG · 2020-09-15T11:07:04.412Z · LW(p) · GW(p)

Why should something that is possible but low probability have so much impact?

Replies from: ike
comment by ike · 2020-09-15T13:19:14.243Z · LW(p) · GW(p)

The argument is laid out in the OP.

Replies from: TAG
comment by TAG · 2020-09-17T18:14:01.549Z · LW(p) · GW(p)

I don't see any mention of probability.

Replies from: ike
comment by ike · 2020-09-17T18:52:41.512Z · LW(p) · GW(p)

The argument does not depend on probability.

If you disagree with the conclusion, please explain which premise is wrong, or explain how the conclusion can be false despite all premises holding.

Replies from: TAG
comment by TAG · 2020-09-17T19:32:34.685Z · LW(p) · GW(p)

Since the argument does not mention probability, it doesn't refute the counterargument that unlikely scenarios involving simulations or multiple universes don't significantly undermine the ability to make claims about ontology.

Replies from: ike
comment by ike · 2020-09-17T22:04:29.653Z · LW(p) · GW(p)

That's not a counterargument, as it's fully consistent with the conclusion.

Replies from: TAG
comment by TAG · 2020-09-18T10:04:18.088Z · LW(p) · GW(p)

> don’t significantly undermine the ability to make claims about ontology.

Replies from: ike
comment by ike · 2020-09-18T15:00:44.003Z · LW(p) · GW(p)

If you take that as a premise and you consider it contradictory to my conclusion and you accept my premises, then the premises you accept imply a contradiction. That's your problem, not mine.

Replies from: TAG
comment by TAG · 2020-09-21T17:49:35.744Z · LW(p) · GW(p)

Read back; there's an even number of negatives.

Replies from: ike
comment by ike · 2020-09-21T19:55:47.943Z · LW(p) · GW(p)

Not following. Can you state your point plainly? Which part of my argument do you reject?

comment by Dach · 2020-09-29T02:42:16.369Z · LW(p) · GW(p)

It's a well-known tragedy that (unless Humanity gains a perspective on reality far surpassing my wildest expectations) there are arbitrarily many nontrivially unique theories which correspond to any finite set of observations.

The practical consequence of this (a small leap, but valid) is that we can remove any idea you have and make exactly the same predictions about sensory experiences by reformulating our model. Yes, any idea. Models are not even slightly unique - the idea of anything "really existing" is "unnecessary", but literally every belief is "unnecessary". I'd expect some beliefs would, for the practical purposes of present-day-earth human brains, be impossible to replace, but I digress.

(Joke: what's the first step of more accurately predicting your experiences? Simplifying your experiences! Ahaha!)

You cannot "know" anything, because you're experiencing exactly the same thing as you could possibly be experiencing if you were wrong. You can't "know" that you're either wrong or right, or neither, you can't "know" that you can't "know" anything, etc. etc. etc.

There are infinitely many different ontologies which support every single piece of information you have ever or will ever experience.

In fact, no experience indicates anything- we can build a theory of everything which explains any experience but undermines any inferences made using it, and we can do this with a one-to-one correspondence to theories that support that inference. 

In fact, there's no way to draw the inference that you're experiencing anything. We can build infinitely many models (Or, given the limits on how much matter you can store in a Hubble volume, an arbitrarily large but finite number of models) in which the whole concept of "experience" is explained away as delusion...

And so on!

The main point of making beliefs pay rent is having a more computationally efficient model- doing things more effectively. Is your reformulation more effective than the naïve model? No.

Your model, and this whole line of thought, is not paying rent.

Replies from: ike, TAG
comment by ike · 2020-10-01T15:06:48.382Z · LW(p) · GW(p)

> We can build infinitely many models (Or, given the limits on how much matter you can store in a Hubble volume, an arbitrarily large but finite number of models) in which the whole concept of "experience" is explained away as delusion

This is false. I actually have no idea what it would mean for an experience to be a delusion - I don't think that's even a meaningful statement.

I'm comfortable with the Cartesian argument that allows me to know that I am experiencing things.

> Your model, and this whole line of thought, is not paying rent.

On the contrary, it's the naive realist model that doesn't pay rent by not making any predictions at all different from my simpler model.

I don't really care if one includes realist claims in their model. It's basically inert. It just makes the model more complicated for no gain.

Replies from: Dach
comment by Dach · 2020-10-01T22:22:16.172Z · LW(p) · GW(p)

> This is false. I actually have no idea what it would mean for an experience to be a delusion - I don't think that's even a meaningful statement.
>
> I'm comfortable with the Cartesian argument that allows me to know that I am experiencing things.

Everything you're thinking is compatible with a situation in which you're actually in a simulation hosted in some entirely alien reality (2 + 2 = 3, experience is meaningless, causes follow after effects, (True ^ True) = False, etc.), which is being manipulated in extremely contrived ways which produce your exact current thought processes.

There are an exhausting number of different riffs on this idea - maybe you're in an asylum and all of your thinking, including "I actually have no idea what it would mean for an experience to be a delusion", is due to some major mental disorder. Oh, how obvious - my idea of experience was a crazy delusion all along. I can't believe I said that it was my daughter's arm [LW · GW]. "I think therefore I am"? Absurd!

If you have an argument against this problem, I am especially interested in hearing it - it seems like the fact that you can't tell between this situation and reality (and you can't know whether this situation is impossible as a result, etc.) is part of the construction of the scenario. You'd need to show that the whole idea that "We can construct situations in which you're having exactly the same thoughts as you are right now, but with some arbitrary change (which you don't even need to believe is theoretically possible or coherent) in the background" is invalid.

Do I think this is a practical concern? Of course not. The Cartesian argument isn't sufficient to convince me, though- I'm just assuming that I really exist and things are broadly as they seem. I don't think it's that plausible to expect that I would be able to derive these assumptions without using them- there is no epistemological rock bottom.

> On the contrary, it's the naive realist model that doesn't pay rent by not making any predictions at all different from my simpler model.

Your model is (I allege) not actually simpler. It just seems simpler because you "removed something" from it. A mind could be much "simpler" than ours, but also less useful - which is the actual point of having a simpler model. The "simplest" model which accurately predicts everything we see is going to be a fundamental physical theory, but making accurate predictions about complicated macroscopic behavior entirely from first principles is not tractable with eight billion human brains' worth of hardware.

The real question of importance is, does operating on a framework which takes specific regular notice of the idea that naïve realism is technically a floating belief increase your productivity in the real world? I can't see why that would be the case - it requires occasionally spending my scarce brainpower on reformatting my basic experience of the world in more complicated terms, I have to think about whether or not I should argue with someone whenever they bring up the idea of naïve realism, etc. You claim adopting the "simpler" model doesn't change your predictions, so I don't see what justifies these costs. Are there some major hidden costs of naïve realism that I'm not aware of? Am I actually wasting more unconscious brainpower working with the idea of "reality" and things "really existing"?

If I have to choose between two models which make the exact same predictions (i.e. my current model and your model), I'm going to choose the model which is better at achieving my goals. In practice, this is the more computationally efficient model, which (I allege) is my current model.

Replies from: ike
comment by ike · 2020-10-01T23:44:26.310Z · LW(p) · GW(p)

>Everything you're thinking is compatible with a situation in which you're actually in a simulation hosted in some entirely alien reality (2 + 2 = 3, experience is meaningless, causes follow after effects, (True ^ True) = False, etc, which is being manipulated in extremely contrived ways which produce your exact current thought processes.

I disagree, and see no reason to agree. You have not fully specified this situation, and have offered no argument for why this situation is coherent. Being as this is obviously self-contradictory (at least the part about logic), why should I accept this? 

>If you have an argument against this problem, I am especially interested in hearing it

The problem is that you're assuming that verificationism is false in arguing against it, which is impermissible. E.g. "maybe you're in an asylum" assumes that it's possible for an asylum to "exist" and for someone to be in it, both of which are meaningless under my worldview. 

Same for any other way to cash out "it's all a delusion" - you need to stipulate unverifiable entities in order to even define delusion. 

Now, this is distinct from the question of whether I should have 100% credence in claims such as 2+2=4 or "I am currently having an experience". I can have uncertainty as to such claims without allowing for them to be meaningfully false. I'm not 100% certain that verificationism is valid. 

>It seems like the fact you can't tell between this situation and reality

What do you mean by "reality"? You keep using words that are meaningless under my worldview without bothering to define them. 

>The real question of importance is, does operating on a framework which takes specific regular notice of the idea that naïve realism is technically a floating belief increase your productivity in the real world?

This isn't relevant to the truth of verificationism, though. My argument against realism is that it's not even coherent. If it makes your model prettier, go ahead and use it. You'll just run into trouble if you try doing e.g. quantum physics and insist on realism - you'll do things like assert there must be loopholes in Bell's theorem, and search for them and never find them. 

Replies from: Dach
comment by Dach · 2020-10-02T02:18:57.351Z · LW(p) · GW(p)

E.g. "maybe you're in an asylum" assumes that it's possible for an asylum to "exist" and for someone to be in it, both of which are meaningless under my worldview. 

What do you mean by "reality"? You keep using words that are meaningless under my worldview without bothering to define them. 

You're implementing a feature into your model which doesn't change what it predicts but makes it less computationally efficient.

The fact you're saying "both of which are meaningless under my worldview" is damning evidence that your model (or at least your current implementation of your model) sucks, because that message transmits useful information to someone using my model but apparently has no meaning in your model. Ipso facto, my model is better. There's no coherent excuse for this.

> This isn't relevant to the truth of verificationism, though. My argument against realism is that it's not even coherent. If it makes your model prettier, go ahead and use it.

What does it mean for your model to be "true"? There are infinitely many unique models which will predict all evidence you will ever receive- I established this earlier and you never responded.

It's not about making my model "prettier"- my model is literally better at evoking the outcomes that I want to evoke. This is the correct dimension on which to evaluate your model.

> You'll just run into trouble if you try doing e.g. quantum physics and insist on realism - you'll do things like assert there must be loopholes in Bell's theorem, and search for them and never find them.

My preferred interpretation of quantum physics (many worlds) was formulated before Bell's theorem, and it turns out that Bell's theorem is actually strong evidence in favor of many worlds. Bell's theorem does not "disprove realism"; it just rules out local hidden variable theories. My interpretation already predicted that.

I suspect this isn't going anywhere, so I'm abdicating.

Replies from: ike
comment by ike · 2020-10-02T03:20:27.852Z · LW(p) · GW(p)

>The fact you're saying "both of which are meaningless under my worldview" is damning evidence that your model (or at least your current implementation of your model) sucks, because that message transmits useful information to someone using my model but apparently has no meaning in your model. 

I don't think that message conveys useful information in the context of this argument, to anyone. I can model regular delusions just fine - what I can't model is a delusion that gives one an appearance of having experiences while no experiences were in fact had. Saying "delusion" doesn't clear up what you mean. 

Saying "(True ^ True) = False" also doesn't convey information. I don't know what is meant by a world in which that holds, and I don't think you know either. Being able to say the words doesn't make it coherent. 

You went to some severe edge cases here - not just simulation, but simulation that also somehow affects logical truths or creates a false appearance of experience. Those don't seem like powers even an omnipotent being would possess, so I'm skeptical that those are meaningful, even if I was wrong about verificationism in general. 

For more ordinary delusions or simulations, I can interpret that language in terms of expected experiences. 

>What does it mean for your model to be "true"?

Nothing, and this is precisely my point. Verificationism is a criterion of meaning, not part of my model. The meaning of "verificationism is true" is just that all statements that verificationism says are incoherent are in fact incoherent. 

>There are infinitely many unique models which will predict all evidence you will ever receive- I established this earlier and you never responded. 

I didn't respond because I agree. All models are wrong, some models are useful. Use Solomonoff to weight various models to predict the future, without asserting that any of those models are "reality". Solomonoff doesn't even have a way to mark a model as "real", that's just completely out of scope. 
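
As an illustration of "use Solomonoff to weight various models" with mixture prediction, here is a minimal sketch assuming a toy, computable hypothesis class (actual Solomonoff induction is uncomputable; the hypotheses, description lengths, and function names below are invented for illustration):

```python
# Toy Occam/Solomonoff-style prediction: weight hypotheses by 2^(-description length),
# update those weights on observations, and predict with the mixture.
# No hypothesis is ever marked as "real"; the weights are only used to predict experiences.

# Hypothetical hypothesis class: (description length in bits, P(next bit = 1)).
HYPOTHESES = [
    (3, 0.5),    # "fair coin" - short program
    (10, 0.9),   # "biased coin" - longer program
    (25, 0.99),  # very specific hypothesis - longer still
]

def posterior_weights(observations):
    """Combine Occam priors (2^-length) with the likelihood of the observed bits."""
    weights = []
    for length, p_one in HYPOTHESES:
        prior = 2.0 ** -length
        likelihood = 1.0
        for bit in observations:
            likelihood *= p_one if bit == 1 else (1.0 - p_one)
        weights.append(prior * likelihood)
    total = sum(weights)
    return [w / total for w in weights]

def predict_next(observations):
    """Mixture prediction for P(next bit = 1), summed over all hypotheses."""
    weights = posterior_weights(observations)
    return sum(w * p_one for w, (_, p_one) in zip(weights, HYPOTHESES))

print(predict_next([1, 1, 1, 1, 1, 1, 1, 1]))  # drifts toward the biased-coin hypotheses
```

The point of the sketch is that prediction only ever uses the weighted mixture; nothing in it requires designating any single hypothesis as the "real" world.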

I'm not particularly convinced by your claim that one should believe in untrue or incoherent things if it helps them be more productive, and I'm not interested in debating that. If you have a counter-argument to anything I've said, or a reason to think ontological statements are coherent, I'm interested in that. But a mere assertion that talking about these incoherent things boosts productivity isn't interesting to me now. 

comment by TAG · 2020-09-29T09:46:42.541Z · LW(p) · GW(p)

> but literally every belief is “unnecessary”.

You can't define things as (un)necessary without knowing what you value or what goal you are trying to achieve. Assuming that only prediction is valuable is pretty question-begging.

Replies from: Dach
comment by Dach · 2020-09-30T00:42:38.305Z · LW(p) · GW(p)

This is only true for trivial values, e.g. "I terminally value having this specific world model".

For most utility schemes (Including, critically, that of humans), the supermajority of the purpose of models and beliefs is instrumental. For example, making better predictions, using less computing power, etc.

In fact, humans who do not recognize this fact and stick to beliefs or models because they like them are profoundly irrational. If the sky is blue, I wish to believe the sky is blue, and so on. So, assuming that only prediction is valuable is not question-begging - I suspect you already agreed with this and just didn't realize it.

In the sense that beliefs (and the models they're part of) are instrumental goals, any specific belief is "unnecessary". Note the quotations around "unnecessary" in this comment and the comment you're replying to. By "unnecessary" I mean the choice of which beliefs and which model to use is subject to the whims of which is more instrumentally valuable - in practice, a complex tradeoff between predictive accuracy and computational demands.

Replies from: TAG
comment by TAG · 2020-09-30T09:21:25.963Z · LW(p) · GW(p)

> This is only true for trivial values, e.g. “I terminally value having this specific world model”.

It's also true for "I terminally value understanding the world, whatever the correct model is".

Replies from: Dach
comment by Dach · 2020-09-30T19:48:00.326Z · LW(p) · GW(p)

> It's also true for "I terminally value understanding the world, whatever the correct model is".

I said e.g., not i.e., and "I terminally value understanding the world, whatever the correct model is" is also a case of trivial values.

First, a disclaimer: It's unclear how well the idea of terminal/instrumental values maps to human values. Humans seem pretty prone to value drift- whenever we decide we like some idea and implement it, we're not exactly "discovering" some new strategy and then instrumentally implementing it. We're more incorporating the new strategy directly into our value network. It's possible (Or even probable) that our instrumental values "sneak in" to our value network and are basically terminal values with (usually) lower weights.

Now, what would we expect to see if "Understanding the world, whatever the correct model is" was a broadly shared terminal value in humans, in the same way as the other prime suspects for terminal value (survival instinct, caring for friends and family, etc)? I would expect:

  1. It's exhibited in the vast majority of humans, with some medium correlation between intelligence and the level to which this value is exhibited. (Strongly exhibiting this value tends to cause greater effectiveness i.e. intelligence, but most people already strongly exhibit this value)
  2. Companies to have jumped on this opportunity like a pack of wolves and have designed thousands of cheap wooden signs with phrases like "Family, love, 'Understanding the world, whatever the correct model is'".
  3. Movements which oppose this value are somewhat fringe and widely condemned.
  4. Most people who espouse this value are not exactly sure where it's from, in the same way they're not exactly sure where their survival instinct or their love for their family came from.

But, what do we see in the real world?

  1. Exhibiting this value is highly correlated with intelligence. Almost everyone lightly exhibits this value, because its practical applications are pretty obvious (Pretending your mate isn't cheating on you is just plainly a stupid strategy), but it's only strongly and knowingly exhibited among really smart people interested in improving their instrumental capabilities.
  2. Movements which oppose this value are common. 
  3. Most people who espouse this value got it from an intellectual tradition, some wise counseling, etc.

Replies from: TAG, simon-kohuch
comment by TAG · 2020-09-30T20:31:06.546Z · LW(p) · GW(p)

> Now, what would we expect to see if “Understanding the world, whatever the correct model is” was a broadly shared terminal value in humans

I never claimed it was a broadly shared terminal value.

My argument is that you can't make a one-size-fits-all recommendation of realism or anti realism, because individual values vary.

Replies from: Dach
comment by Dach · 2020-10-01T03:12:47.803Z · LW(p) · GW(p)

Refer to my disclaimer for the validity of the idea of humans having terminal values. In the context of human values, I think of "terminal values" as the ones directly formed by evolution and hardwired into our brains, and thus broadly shared. The apparent exceptions are rarish and highly associated with childhood neglect and brain damage.

"Broadly shared" is not a significant additional constraint on what I mean by "terminal value", it's a passing acknowledgement of the rare counterexamples.

If that's your argument then we somewhat agree. I'm saying that the model you should use is the model that most efficiently pursues your goals, and (in response to your comment) that utility schemes which terminally value having specific models (and thus whose goals are most efficiently pursued through using said arbitrary terminally valued model and not a more computationally efficient model) are not evidently present among humans in great enough supply for us to expect that that caveat applies to anyone who will read any of these comments.

Real world examples of people who appear at first glance to value having specific models (e.g. religious people) are pretty sketchy- if this is to be believed, you can change someone's terminal values with the argumentative equivalent of a single rusty musket ball and a rubber band. That defies the sort of behaviors we'd want to see from whatever we're defining as a "terminal value", keeping in mind the inconsistencies between the way human value systems are structured and the way the value systems of hypothetical artificial intelligences are structured. 

The argumentative strategy required to convince someone to ignore instrumentally unimportant details about the truth of reality looks more like "have a normal conversation with them" than "display a series of colorful flashes as a precursor to the biological equivalent of arbitrary code execution" or otherwise psychologically breaking them in a way sufficient to get them to do basically anything, which is what would be required to cause serious damage to what I'm talking about when I say "terminal values" in the context of humans.

Replies from: TAG
comment by TAG · 2020-10-01T10:48:01.635Z · LW(p) · GW(p)

> Refer to my disclaimer for the validity of the idea of humans having terminal values. In the context of human values, I think of “terminal values” as the ones directly formed by evolution and hardwired into our brains, and thus broadly shared. The apparent exceptions are rarish and highly associated with childhood neglect and brain damage.

The existence of places like LessWrong, philosophy departments, etc. indicates that people do have some sort of goal to understand things in general, aside from any nitpicking about what is a true terminal value.

> If that’s your argument then we somewhat agree. I’m saying that the model you should use is the model that most efficiently pursues your goals,

Well, if my goal is the truth, I am going to want the model that corresponds the best, not the model that predicts most efficiently.

> and (in response to your comment) that utility schemes which terminally value having specific models

I've already stated that I am not talking about confirming specific models.

Replies from: Dach
comment by Dach · 2020-10-02T08:48:58.227Z · LW(p) · GW(p)

> The existence of places like LessWrong, philosophy departments, etc. indicates that people do have some sort of goal to understand things in general, aside from any nitpicking about what is a true terminal value.

I agree- lots of people (including me, of course) are learning because they want to- not as part of some instrumental plan to achieve their other goals. I think this is significant evidence that we do terminally value learning. However, the way that I personally have the most fun learning is not the way that is best for cultivating a perfect understanding of reality (nor developing the model which is most instrumentally efficient, for that matter). This indicates that I don't necessarily want to learn so that I can have the mental model that most accurately describes reality- I have fun learning for complicated reasons which I don't expect align with any short guiding principle.

Also, at least for now, I get basically all of my expected value from learning from my expectations for being able to leverage that knowledge. I have a lot more fun learning about e.g. history than the things I actually spend my time on, but historical knowledge isn't nearly as useful, so I'm not spending my time on it.

In retrospect, I should've said something more along the lines of "We value understanding in and of itself, but (at least for me, and at least for now) most of the value in our understanding is from its practical role in the advancement of our other goals."

> I've already stated that I am not talking about confirming specific models.

There's been a mix-up here - my meaning for "specific" also includes "whichever model corresponds to reality the best".

comment by Simon Kohuch (simon-kohuch) · 2020-09-30T19:59:58.778Z · LW(p) · GW(p)

Looks like an issue of utility vs truth to me. Time to get deontological :) (joke)

comment by Cookie Factory (cookie-factory) · 2020-09-12T17:16:18.719Z · LW(p) · GW(p)

Like flowers in the spring and leaves in the fall, the decline phase within the civilization cycle brings out all the flavor du jour takes on solipsism and nihilism from the nerd demographic. The more things change the more they stay the same.