This Territory Does Not Exist

post by ike · 2020-08-13T00:30:25.700Z · LW · GW · 197 comments

Response to: Making Beliefs Pay Rent (in Anticipated Experiences) [LW · GW], Belief in the Implied Invisible [LW · GW], and No Logical Positivist I [LW · GW]

I recently decided that some form of strong verificationism is correct - that beliefs that don't constrain expectation are meaningless (with some caveats). After reaching this conclusion, I went back and read EY's posts on the topic, and found that they didn't really address the strong version of the argument. This post consists of two parts - first, the positive case for verificationism, and second, responses to EY's arguments against it.

The case for Strong Verificationism

Suppose I describe a world to you. I explain how the physics works, I tell you some stories about what happens in that world. I then make the dual assertions that:

1. It's impossible to reach that world from ours, it's entirely causally disconnected, and

2. That world "really exists"

One consequence of verificationism is that, if 1 is correct, then 2 is meaningless. Why is it meaningless? For one, it's not clear what it means, and every alternative description will suffer from similarly vague terminology. I've tried, and asked several others to try, and nobody has been able to give a definition of what it means for something to "really exist" apart from expectations that actually clarifies the question.

Another way to look at this is through the map-territory distinction. "X really exists" is a map claim (i.e. a claim made by the map), not a territory claim, yet it purports to be about the territory. That makes it a category error, and meaningless.

Now, consider our world. Again, I describe its physics to you, and then assert "This really exists." If you found the above counterintuitive, this will be even worse - but I assert this latter claim is also meaningless. The belief that this world exists does not constrain expectations beyond those of a map that doesn't contain such a belief. In other words, we can have beliefs about physics that don't entail a belief in "actual existence" - such a claim is not required for any predictions, and is extraneous and meaningless.

As far as I can tell, we can do science just as well without assuming that there's a real territory out there somewhere.

Some caveats: I recognize that some critiques of verificationism relate to mathematical or logical beliefs. I'm willing to restrict the set of statements I consider incoherent to ones that make claims about what "actually exists", which avoids this problem. Also, following this paradigm, one will end up with many statements of the form "I expect to experience events based on a model containing X", and I'm ok with a colloquial usage of exist to shorten that to "X exists". But when you get into specific claims about what "really exists", I think you get into incoherency.

Response to EY sequence

In Making Beliefs Pay Rent, he asserts the opposite without argument:

But the world does, in fact, contain much that is not sensed directly. We don’t see the atoms underlying the brick, but the atoms are in fact there.

He then elaborates:

You stand on top of a tall building, next to a grandfather clock with an hour, minute, and ticking second hand. In your hand is a bowling ball, and you drop it off the roof. On which tick of the clock will you hear the crash of the bowling ball hitting the ground?
To answer precisely, you must use beliefs like Earth’s gravity is 9.8 meters per second per second, and This building is around 120 meters tall. These beliefs are not wordless anticipations of a sensory experience; they are verbal-ish, propositional.

I disagree with the last sentence. These beliefs are ways of saying "I expect my experiences to be consistent with my map which says g=9.8m/s^2, and also says this building is 120 meters tall". Perhaps the beliefs are a compound of the above and also "my map represents an actual world" - but as I've argued, the latter is both incoherent and not useful for predicting experiences.
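
To make this concrete, here is a minimal sketch (assuming only the two map parameters from the quoted example, g = 9.8 m/s^2 and a 120-meter building, and ignoring air resistance and the travel time of sound) of how those map parameters cash out as an anticipated experience:

```python
import math

# Map parameters (the only assumptions here): gravitational acceleration
# and building height, taken from the quoted example.
g = 9.8         # m/s^2
height = 120.0  # meters

# The anticipated experience: roughly when you expect to hear the crash.
fall_time = math.sqrt(2 * height / g)  # ~4.95 seconds
tick = math.ceil(fall_time)            # first full second-hand tick at or after impact

print(f"Expected crash after {fall_time:.2f} s, around tick {tick} of the second hand")
```

Appending "and the map reflects an actual world" changes nothing in this computation - the anticipated tick is the same either way.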

In Belief in the Implied Invisible, he begins an actual argument for this position, which is continued in No Logical Positivist I. He mostly argues that such things actually exist. Note that I'm not arguing that they don't exist, but that the question of whether they exist is meaningless - so his arguments don't directly apply, but I will address them.

If the expansion of the universe is accelerating, as current cosmology holds, there will come a future point where I don't expect to be able to interact with the photon even in principle—a future time beyond which I don't expect the photon's future light cone to intercept my world-line.  Even if an alien species captured the photon and rushed back to tell us, they couldn't travel fast enough to make up for the accelerating expansion of the universe.
Should I believe that, in the moment where I can no longer interact with it even in principle, the photon disappears?
No.
It would violate Conservation of Energy.  And the second law of thermodynamics.  And just about every other law of physics.  And probably the Three Laws of Robotics.  It would imply the photon knows I care about it and knows exactly when to disappear.
It's a silly idea.

As above, my claim is not that the photon disappears. That would indeed be a silly idea. My claim is that the very claim that a photon "exists" is meaningless. We have a map that makes predictions. The map contains a photon, and it contains that photon even outside any areas relevant to predictions, but why should I care? The map is for making predictions, not for ontology.

Later on, he mentions Solomonoff induction, which is somewhat ironic because that is explicitly a model for prediction. Not only that, but the predictions produced with Solomonoff induction are from an average of many different machines, "containing" many different entities. The map of Solomonoff induction, in other words, contains far more entities than anyone but Max Tegmark believes in. If we're to take that seriously, then we should just agree that everything mathematically possible exists. I have much less disagreement with that claim (despite also thinking it's incoherent) than with claims that some subset of that multiverse is "real" and the rest is "unreal".
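
Since Solomonoff induction is doing work in this exchange, a toy sketch may help show what "an average of many different machines" means. The real thing is uncomputable; the handful of hypotheses below, their names, and their complexities are illustrative assumptions, not anything from the original posts:

```python
# Toy, computable stand-in for Solomonoff-style prediction: a mixture over
# several "machines" (here, simple predictors for a bit sequence), each
# weighted by a simplicity prior 2^-complexity and by how well it predicted
# the data seen so far.
hypotheses = {
    # name: (complexity in bits, P(next bit = 1 | history))
    "all ones":    (2, lambda hist: 1.0),
    "alternating": (3, lambda hist: 0.0 if hist and hist[-1] == 1 else 1.0),
    "fair coin":   (1, lambda hist: 0.5),
}

def predict_next(history):
    """Mixture probability that the next bit is 1."""
    weights, point_preds = [], []
    for complexity, predictor in hypotheses.values():
        w = 2.0 ** -complexity                 # simplicity prior
        for i, bit in enumerate(history):      # likelihood of the data so far
            p1 = predictor(history[:i])
            w *= p1 if bit == 1 else (1.0 - p1)
        weights.append(w)
        point_preds.append(predictor(history))
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, point_preds)) / total

print(predict_next([1, 1, 1, 1]))  # ~0.94: dominated by "all ones", but every
                                   # unfalsified hypothesis keeps some weight
```

The relevant point is the last comment: every hypothesis that hasn't been contradicted by observation retains weight in the mixture, so the predictive "map" contains far more entities than any single preferred world-model.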

If you suppose that the photon disappears when you are no longer looking at it, this is an additional law in your model of the universe. 

I don't suppose that. I suppose that the concept of a photon actually existing is meaningless and irrelevant to the model.

When you believe that the photon goes on existing as it wings out to infinity, you're not believing that as an additional fact.
What you believe (assign probability to) is a set of simple equations; you believe these equations describe the universe.

This latter belief is an "additional fact". It's more complicated than "these equations describe my expectations".

To make it clear why you would sometimes want to think about implied invisibles, suppose you're going to launch a spaceship, at nearly the speed of light, toward a faraway supercluster.  By the time the spaceship gets there and sets up a colony, the universe's expansion will have accelerated too much for them to ever send a message back.  Do you deem it worth the purely altruistic effort to set up this colony, for the sake of all the people who will live there and be happy?  Or do you think the spaceship blips out of existence before it gets there?  This could be a very real question at some point.

This is a tough question, if only because altruism is complicated to ground on my view - if other people's existence is meaningless, in what sense can it be good to do things that benefit other people? I suspect it all adds up to normality. Regardless, I'll note that the question applies on my view just as much to a local altruistic act, since the question of whether other people have internal experiences would be incoherent. If it adds up to normality there, which I believe it does, then it should present no problem for the spaceship question as well. I'll also note that altruism is hard to ground regardless - it's not like there's a great altruism argument if only we conceded that verificationism is wrong.

Now for No Logical Positivist I.

This is the first post that directly addresses verificationism on its own terms. He defines it in a way similar to my own view. Unfortunately, his main argument seems to be "the map is so pretty, it must reflect the territory." It's replete with map-territory confusion:

By talking about the unseen causes of visible events, it is often possible for me to compress the description of visible events.  By talking about atoms, I can compress the description of the chemical reactions I've observed.

Sure, but a simpler map implies nothing about the territory.

Further on:

If logical positivism / verificationism were true, then the assertion of the spaceship's continued existence would be necessarily meaningless, because it has no experimental consequences distinct from its nonexistence.  I don't see how this is compatible with a correspondence theory of truth.

Sure, it's incompatible with the claim that beliefs are true if they correspond to some "actual reality" that's out there. That's not an argument for the meaning of that assertion, though, because no argument is given for this correspondence theory of truth - the link is dead, but the essay is at https://yudkowsky.net/rational/the-simple-truth/ and grounds truth with a parable about sheep. We can ground truth just as well as follows: a belief is a statement with implications as to predicted experiences, and a belief is true insofar as it corresponds to experiences that end up happening. None of this requires an additional assumption that there's an "actual reality".

Interestingly, in that post he offers a quasi-definition of "reality" that's worth addressing separately.

“Frankly, I’m not entirely sure myself where this ‘reality’ business comes from. I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead. I need a name for whatever-it-is that determines my experimental results, so I call it ‘reality’. This ‘reality’ is somehow separate from even my very best hypotheses. Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.”

Here, reality is merely a convenient term to use, which helps conceptualize errors in the map. This doesn't imply that reality exists, nor that reality as a concept is coherent. I have beliefs. Sometimes these beliefs are wrong, i.e. I experience things that are inconsistent with those beliefs. On my terms, if we want to use the word reality to refer to a set of beliefs that would never result in such inconsistency, that's fine, and those beliefs would never be wrong. You could say that a particular belief "reflects reality" insofar as it's part of that set of beliefs that are never wrong. But if you wanted to say "I believe that electrons really exist", that would be meaningless - it's just "I believe that this belief is never wrong", which is just equal to "I believe this".

Moving back to the Logical Positivism post:

A great many untestable beliefs are not meaningless; they are meaningful, just almost certainly false:  They talk about general concepts already linked to experience, like Suns and chocolate cake, and general frameworks for combining them, like space and time.  New instances of the concepts are asserted to be arranged in such a way as to produce no new experiences (chocolate cake suddenly forms in the center of the Sun, then dissolves).  But without that specific supporting evidence, the prior probability is likely to come out pretty damn small - at least if the untestable statement is at all exceptional.
If "chocolate cake in the center of the Sun" is untestable, then its alternative, "hydrogen, helium, and some other stuff, in the center of the Sun at 12am on 8/8/1", would also seem to be "untestable [? · GW]": hydrogen-helium on 8/8/1 cannot be experientially discriminated against the alternative hypothesis of chocolate cake.  But the hydrogen-helium assertion is a deductive consequence of general beliefs themselves well-supported by experience.  It is meaningful, untestable (against certain particular alternatives), and probably true.

Again, the hydrogen-helium assertion is a feature of the map, not the territory. One could just as easily have a map that doesn't make that assertion, but has all the same predictions. The question of "which map is real" is a map-territory confusion, and meaningless.

I don't think our discourse about the causes of experience has to treat them strictly in terms of experience.  That would make discussion of an electron a very tedious affair.  The whole point of talking about causes is that they can be simpler than direct descriptions of experience.

Sure, as I mentioned above, I'm perfectly fine with colloquial discussion of claims using words like exist in order to make discussion / communication easier. But that's not at all the same as admitting that the claim that electrons "exist" is coherent, rather than a convenient shorthand to avoid adding a bunch of experiential qualifiers to each statement.

197 comments


comment by ChristianKl · 2020-08-13T12:15:16.542Z · LW(p) · GW(p)

How do you deal with Gödel's finding that every system is left with some questions that can't be resolved and where the answers can't be verified? 

Replies from: ike
comment by ike · 2020-08-13T13:29:23.477Z · LW(p) · GW(p)

That's about mathematical claims, where my argument doesn't apply.

Replies from: gworley, ChristianKl
comment by Gordon Seidoh Worley (gworley) · 2020-08-13T18:21:51.201Z · LW(p) · GW(p)

This woefully underappreciates Godel. Although it starts out as being about a particular mathematical system, it's actually about formal systems in general, so it's only not relevant if you are trying to suppose a sort of non-systematic epistemology where claims are not related to each other by any kind of rules, including basic "rules" like causality.

Replies from: ike
comment by ike · 2020-08-13T18:37:44.064Z · LW(p) · GW(p)

I understand that Godel applies to formal systems. But the claim that Godel makes is that some mathematical claims will be unprovable, which seems irrelevant to my arguments, which are not about mathematical claims.

What kinds of statements do you think Godel implies, which my epistemology as laid out in this post cannot handle?

comment by ChristianKl · 2020-08-13T15:27:10.444Z · LW(p) · GW(p)

It seems to me like any system where the term verificationism makes sense has to be a superset of mathematics and is thus subject to Gödel's findings. 

Replies from: ike
comment by ike · 2020-08-13T15:54:40.128Z · LW(p) · GW(p)

Sure, but why does it present problems for verificationism as I'm using it? I'm saying that the concept of external reality is meaningless, the existence of unprovable math questions seems orthogonal.

Replies from: ChristianKl, gworley
comment by ChristianKl · 2020-08-13T19:03:39.065Z · LW(p) · GW(p)

According to Gödel any formal system has things that can be true while still not being able to be proved (or verified) within the system. 

Replies from: ike
comment by ike · 2020-08-13T19:10:52.958Z · LW(p) · GW(p)

Sure. Those things are mathematical claims, and I exempted mathematical claims.

There are no claims of the form "X exists" inside my formal system, and those are the only claims I'm asserting are incoherent. (Or "X doesn't exist", etc.)

comment by Gordon Seidoh Worley (gworley) · 2020-08-13T18:23:54.414Z · LW(p) · GW(p)

This seems to imply I should ignore everything you and everyone else says, since if it's truly meaningless then there's no point in engaging. Ergo, why are you even in these comments saying anything or bothering to write an article, except as some figment of my imagination here to give me something to do?

Replies from: ike
comment by ike · 2020-08-13T18:40:45.830Z · LW(p) · GW(p)

My theory produces the exact same predictions as a theory containing the realism postulate. It shouldn't affect any of your decisions unless your decisions hinge on incoherent claims that are irrelevant to predictions. I think such decision theories are wrong in some sense. Regardless, I don't think that my arguments have any real relevance to what anyone should or should not do.

Replies from: TAG
comment by TAG · 2020-08-13T18:44:08.525Z · LW(p) · GW(p)

My theory produces the exact same predictions as a theory containing the realism postulate.

You haven't explained why predictions are the only thing that matters.

Replies from: ike
comment by ike · 2020-08-13T18:54:08.622Z · LW(p) · GW(p)

Matter in what context? I didn't discuss decision theory in the post (other than indirectly w.r.t. altruism), Gordon is the first to bring it up, I'm simply saying that I believe that decision theory shouldn't depend on incoherent claims. My point about predictions is because Gordon asked why something happened, and I'm making the point that both theories predict the same thing happening and explain it just as well.

Replies from: TAG
comment by TAG · 2020-08-14T11:13:56.519Z · LW(p) · GW(p)

You don't have any clear criteria for saying that things are coherent or incoherent.

The general idea of realism adds value in that it explains how science works, where observations come from, and so on. That's EY's defense of it, and it has been referenced several times in the comments.

There may be specific issues in figuring out which specific theory to believe in, but that's another matter. There is a stable position where you accept realism, but don't invest in specific theories.

Given my intuitions about (in)coherence, having to take it on faith that science works, without having any idea why, is less coherent than the alternative!

Replies from: ike
comment by ike · 2020-08-14T15:44:32.059Z · LW(p) · GW(p)

You don't have any clear criteria for saying that things exist or don't exist.

An explanation that depends on incoherent claims isn't much of an explanation. Which specific post of EY's are you saying had this defense? I've already responded to all of his posts I could find that bear on the issue.

Take it on faith that science works

This is not required, and you're in exactly the same position whether or not you accept realism. Realism doesn't imply Occam's razor, which is required for induction and science more generally. EY has a post justifying Occam that does not require realism. I don't see what realism adds to the argument. See https://www.lesswrong.com/posts/C8nEXTcjZb9oauTCW/where-recursive-justification-hits-bottom [LW · GW]

Replies from: TAG
comment by TAG · 2020-08-15T10:26:49.044Z · LW(p) · GW(p)

Take it on faith that science works

This is not required

So how does science work?

You can't do it with Occam alone. You need a source of data, and that data needs to have some discoverable consistencies. You can't perform induction on entropy.

Replies from: ike
comment by ike · 2020-08-15T15:06:41.308Z · LW(p) · GW(p)

Occam applied to the only data we have, which is our direct observations, says that the scientific method has been useful in the past and should be presumed to continue to be useful. EY laid out the argument in the post I linked to.

Replies from: TAG
comment by TAG · 2020-08-16T10:16:20.977Z · LW(p) · GW(p)

It doesn't tell you how it works.

Replies from: ike
comment by ike · 2020-08-16T14:56:56.676Z · LW(p) · GW(p)

Neither does realism.

"The universe exists and is inherently simple, therefore induction tends to work" and "Observations are well predicted by inductive formulas, therefore induction tends to work" are of the same form. The first is incoherent and the second is meaningful, but the conclusions are the same.

What is the exact argument and conclusion for which you're saying my view cannot reach?

Replies from: TAG
comment by TAG · 2020-08-17T10:39:49.752Z · LW(p) · GW(p)

“The universe exists and is inherently simple, therefore induction tends to work” and “Observations are well predicted by inductive formulas, therefore induction tends to work” are of the same form.

But not of the same content. The second doesn't tell you how induction works.

Replies from: ike
comment by ike · 2020-08-17T15:44:45.309Z · LW(p) · GW(p)

Neither does the first.

comment by Gordon Seidoh Worley (gworley) · 2020-08-14T03:24:43.373Z · LW(p) · GW(p)

Okay, circling around on this to maybe say something more constructive now that I've thought about it a bit.

Part of the problem is that your central thesis is not very clear on first read, so I had to think about it a bit to really get what your "big idea" or ideas are that motivate this post. I realize you say right up top that you believe a strong version of verificationism is correct, but to me that's not really getting at the core of what you're thinking, that's just something worked out by other people you can point at and say "something in that direction seems right".

(FWIW, even some of the people who came up with logical positivism and related ideas like verificationism eventually worked themselves into a corner and realized there was no way out and the whole thing fell apart. The arguments for why it doesn't work eventually get pretty subtle if you really press the issue, and I doubt I could do them justice, so I'll stay higher level and may not have the time and energy to address every objection that you could bring up, but basically there's 50+ years of literature trying to make ideas like this work and then finding there were inevitably problems.)

So, as best I can tell, the insight you had driving this was something like "oh, there's no way to fully ground most statements, thus those statements are meaningless". I'll respond based on this assumption.

Now, up to a point you are right. Because there is an epistemological gap between the ontological and the ontic, statements about the ontic are "meaningless" in that they are not fully grounded. This is due to epistemic circularity, a modern formulation of the problem of the criterion. Thus the only statements that can be true in a classical sense are statements about statements, i.e. you can only get "truth" from inside an ontology, and there is no matter of truth to be assessed about statements of any other kind. This, alas, is not so great, because it takes all the power out of our everyday experience of truth.

One possible response is to hold this classical notion of truth fixed and say all those statements that can't be grounded are false. The alternative is to say we screwed up on the classical notion of truth, and it's not the category we thought it was.

Whichever path you take, they converge to the same place, because if you reject things as not having truth value, now you have to introduce some new concept to talk about what everyone in everyday speech thinks of as truth, and you'll be forced to go around annoying everyone with your jargon, but whatever. The alternative is to accept that the classical notion of truth is malformed and not the category it was thought to be [LW · GW], and to rehabilitate truth in epistemology to match what people actually mean by it.

As I say, in the limit they converge to the same place, but in different language (cf. the situation with moral realism and anti-realism [LW · GW]). I'll speak now from the second perspective because it's mine, but there's one from the other side.

So then, if we were wrong that truth is about statements that can be proven true and grounded in reality, then what is truth? I take from your talk of "constrained expectations" you already see the gist of it: truth can be about predicting experiences, as in what is true is that which is part of a web of causality that generates our perceptions. This is messy, of course, because we're embedded in the world [LW · GW] and have to perceive it from inside it, but it gives us a way to make sense of what we really mean when we talk about truth, and to see that classical notions of truth were poor but reasonable first approximations of this, made with the intuitive assumption that there was ever some (classical) truth to know that was of a special kind. And on this view, truth of statements about the ontic are not meaningless; in fact, they are the only kind of statements you can make, because the ontological is ontic, but the ontic is not ontological.

Thus, also, why you have a comment section full of people arguing with you, because of course the natural notion of truth we have is meaningful, and it is only on a particular narrow view of it that it is not, which is right within itself but leaves out much and confusingly repurposes words to mean things they don't normally mean.

Replies from: Chris_Leong, ike
comment by Chris_Leong · 2020-09-17T08:58:29.449Z · LW(p) · GW(p)

What do you mean by ontic vs. ontological?

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2020-09-17T21:22:38.183Z · LW(p) · GW(p)

Standard rationalist terminology would be roughly territory and map, respectively.

Replies from: Chris_Leong
comment by Chris_Leong · 2020-09-18T00:14:55.150Z · LW(p) · GW(p)

Is there any difference?

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2020-09-19T00:49:56.364Z · LW(p) · GW(p)

Depends. In a certain vague sense, they are both okay pointers to what I think is the fundamental thing they are about, the two truths doctrine [LW · GW]. In another sense, no, because the map and territory metaphor suggests a correspondence theory of truth, whereas ontological and ontic are about mental categories and being or existence, respectively, and are historically tied to different approaches to truth, namely those associated with transcendental idealism. And if you don't take my stance that they're both different aspects of the same way of understanding reality that are contextualized in different ways and thus both wrong at some limit but in different ways, then there is an ocean of difference between them.

comment by ike · 2020-08-14T04:50:44.483Z · LW(p) · GW(p)

My central thesis is "beliefs that don't constrain expectation are meaningless", or more specifically to avoid some of the pitfalls, "beliefs about external reality are meaningless."

For what it's worth, I came up with the view first, then someone suggested verificationism fit my views, I saw that it was a form of logical positivism, recalled the Sequence posts on that, and reread them.

>basically there's 50+ years of literature trying to make ideas like this work and then finding there were inevitably problems

I spent some time looking through critiques of verificationism. They all seemed to be disputing claims I don't agree with. The analytic/synthetic distinction is irrelevant to my argument, but many of the critiques I found centered on that. I'm not saying that beliefs about mathematics are meaningless, so Godel isn't relevant. It seems to me that all the problems are related to specific features of logical positivism proposals that I see no reason to accept.

I think people use truth/reality to mean several different things. As I said in the post, I'm ok with some of those uses. But there are definitely uses that appear meaningless and impossible to rescue, from my perspective - yet people will adamantly defend those uses. If by reality you just mean your best map that predicts experiences, that's fine. But people will absolutely defend a notion of some "objective reality" that "really exists", and there's just no meaningful account of that that I can think of. So no, I'm not repurposing words to mean something new - the actual meaning of the word in common usage is incoherent.

I don't know what your stance on moral realism is, but I'm confident there are places where I could make a post asserting moral anti-realism or error theory and get a bunch of smart people in the comments arguing with me. That doesn't imply that moral realism is meaningful, or that you need to invent a new concept to represent what people mean when they talk about morals. Incidentally, different moral realists have different accounts of the nature of such beliefs, and I assume the general populace that believes in morals would have different understandings of what that means, if asked.

If you think it's so obvious that the concept of external reality is coherent, then can you give an account of it? I don't need a super detailed technical account, but at least some description of what it would mean for something to exist or not, without referencing predictions, just in the abstract.

Replies from: TAG
comment by TAG · 2020-08-14T10:53:49.770Z · LW(p) · GW(p)

My central thesis is “beliefs that don’t constrain expectation are meaningless”

Are you ever going to argue for that claim? It seems unreasonable that your opponents do all the work.

If you think it’s so obvious that the concept of external reality is coherent

And are you going to define "coherent"?

Replies from: ike
comment by ike · 2020-08-14T15:36:55.861Z · LW(p) · GW(p)

I gave arguments for that in the post. Instead of responding to those, you insisted there's some hypothetical value system that makes the concept of truth useful, and didn't really argue for its coherency at all.

The burden of proof is on people asserting the positive, not entirely on those trying to prove the negative. Pointing out that nobody has given an account of meaning, and 50+ comments later still nobody has done so, is very suggestive that nobody actually has one.

A binary attribute is coherent if it's possible for the attribute to hold as well as possible for the attribute not to hold. External reality is incoherent, because it never holds and it never doesn't hold - there's nothing that it would mean for it to hold or not hold.

Replies from: TAG
comment by TAG · 2020-08-14T15:59:17.282Z · LW(p) · GW(p)

there’s some hypothetical value system that makes the concept of truth useful

There's a value system according to which truth is valuable, and that is the one where truth itself is a terminal value.

If you want to argue that usefulness is the only true value, go ahead.

nobody has given an account of meaning,

Shared meaning is what allows communication to take place. Therefore, communication taking place is evidence of shared meaning.

Do you think that verificationism is filling a vacuum? That it's the only theory of meaning anyone ever came up with? There are multiple fields that deal with the subject of meaning. There are multiple theories of meaning, not zero.

A binary attribute is coherent if it’s possible for the attribute to hold as well as possible for the attribute not to hold. External reality is incoherent, because it never holds and it never doesn’t hold—

Again: the existence of external reality can be supported by abductive reasoning: it's the best explanation for why science works at all.

Replies from: ike
comment by ike · 2020-08-14T16:16:29.540Z · LW(p) · GW(p)

You're entitled to put whatever you want in your value system, but if it references incoherent things then you just can't achieve it. Personally I would much prefer that 1+1=blrgh, but sadly not only does it equal 2, blrgh is meaningless. If I kept insisting that blrgh was meaningful and played a central role in my value system, what would you say? Also, I think truth can be defined without referencing external reality, just observations.

Shared meaning is what allows communication to take place. Therefore, communication taking place is evidence of shared meaning.

Evidence, but not definitive evidence, as I pointed out elsewhere. It's possible for people to be mistaken about what has meaning, and I argue they are so mistaken when talking about external reality.

Do you think that verificationism is filling a vacuum? That it's the only theory of meaning anyone ever came up with? There are multiple fields that deal with the subject of meaning.

I'm talking specifically about the meaning of external reality. I looked up critiques of verificationism and didn't see any accounts of such meaning. If there's a particular account of the meaning of external reality that you'd like me to look at, let me know.

I responded to the explanation argument elsewhere. Regardless, it's unclear how an incoherent claim can serve as an explanation of anything.

Replies from: TAG
comment by TAG · 2020-08-15T10:13:33.387Z · LW(p) · GW(p)

You’re entitled to put whatever you want in your value system, but if it references incoherent things then you just can’t achieve it

Well, if it's rational to only value things that are really achievable, you need a concept of reality.

I’m talking specifically about the meaning of external reality

It's where your sense data come from. You need the idea of an external world to define "sense organs".

Replies from: ike
comment by ike · 2020-08-15T15:03:16.755Z · LW(p) · GW(p)

Incoherent != Impossible. It's not possible for me to get a unicorn, but it's coherent for me to want one. It's incoherent for me to want to both have a unicorn and not have a unicorn at the same time. It's incoherent for me to want the "true map" to have an XML tag saying "this really exists", because the concept of a "true map" is meaningless, and the map with the tag and the map without yield the same predictions and so neither is more correct.

It's where your sense data come from. You need the idea of an external world to define "sense organs".

Disagree. If you're defining it as the source of sense data, that's roughly equivalent to the definition in The Simple Truth essay I linked to and responded to in OP.

My sense data can be predicted well by my map. Why do I need to define a concept of external reality to make sense of that?

Replies from: TAG
comment by TAG · 2020-08-16T10:11:14.611Z · LW(p) · GW(p)

My sense data can be predicted well by my map. Why do I need to define a concept of external reality to make sense of that?

You need it to explain why science works at all.

To summarise:

  1. There's a definition of "external reality": it's where your sense data come from.

  2. There is a purpose served by the posit of an external reality: explaining how science works.

  3. There is a meaning of "true model", based on correspondence.

So every one of your detailed objections has been answered.

Replies from: ike
comment by ike · 2020-08-16T15:03:12.989Z · LW(p) · GW(p)

>You need it to explain why science works at all.

Why? How does realism explain why science works? What is the exact argument, premises, and conclusion?

>There's a definition of "external reality": it's where your sense data come from.

This definition is incoherent, and I addressed it directly in OP. It's also circular - it assumes your sense data comes from somewhere, which is precisely what I'm claiming is incoherent. Giving a definition that assumes the matter in debate is begging the question.

>There is a purpose served by the posit of an external reality: explaining how science works .

I still don't know how your proposed explanation works. I think any such explanation can be replaced with an equally valid explanation that doesn't assume incoherent concepts such as realism. And even if you managed to show that, all you'd be showing is that the concept is useful, which neither implies coherency nor truth.

>There is a meaning of "true model", based on correspondence.

This again relies on the assumption of realism, which is circular.

Replies from: TAG
comment by TAG · 2020-08-17T10:36:09.993Z · LW(p) · GW(p)

What is the exact argument, premises, and conclusion?

Are you going to hold yourself to the same standard?

This definition is incoherent

Exactly and precisely how?

it assumes your sense data comes from somewhere

That's what "sense data" means.

This again relies on the assumption of realism, which is circular

Circular definitions are ubiquitous. Circular arguments are the problem.

Replies from: ike
comment by ike · 2020-08-17T15:43:49.882Z · LW(p) · GW(p)

I've already given a definition of incoherent. A belief is coherent if it constrains expectations. A definition is incoherent if it relies on incoherent beliefs.

Are you going to hold yourself to the same standard?

You're making three claims: one, that realism is coherent; two, that realism explains how science works; three, that one can't explain science as easily without realism.

My claim is that realism is incoherent, and that whatever argument you have for your second claim, I can find one just as good without using realism.

I only asked you to provide details on the second claim. If you do, I will hold myself to the same standards when arguing against the third. I can't give a precise argument to explain science without realism now, because I don't know what you want in an explanation until I've seen your argument.

That's what "sense data" means.

So reality is defined in terms of sense data, and sense data is defined in terms of reality. No, that doesn't work.

Replies from: TAG
comment by TAG · 2020-08-17T16:00:24.987Z · LW(p) · GW(p)

The definition you are now giving:

A belief is coherent if it constrains expectations.

Isn't the same as the one you gave before:

A binary attribute is coherent if it’s possible for the attribute to hold as well as possible for the attribute not to hold

But that would certainly explain why you seem to have been using "incoherent" and "meaningless" interchangeably.

Replies from: ike
comment by ike · 2020-08-17T17:46:46.754Z · LW(p) · GW(p)

Binary attributes aren't the same as beliefs.

comment by Jay Molstad (jay-molstad) · 2020-08-14T21:24:18.879Z · LW(p) · GW(p)

On a deductive level, verificationism is self-defeating; if it's true then it's meaningless. On an inductive level, I've found it to be a good rule of thumb for determining which controversies are likely to be resolvable and which are likely to go nowhere.

Replies from: ike
comment by ike · 2020-08-14T22:13:47.968Z · LW(p) · GW(p)

Verificationism is an account of meaning, not a belief in and of itself. It's not self-defeating.

Regardless, my form is restricted to denying that statements of the sort "an external reality exists/doesn't exist" are meaningful - none of the claims I've made are of that sort, so they're not meaningless on my terms.

Replies from: TAG, jay-molstad
comment by TAG · 2020-08-15T10:48:47.161Z · LW(p) · GW(p)

What verification means is that some statements are meaningless because they do not constrain expected experience. You have clearly subscribed to that view in the past.

That verification principle is indeed self-defeating, which is why, historically, verificationists adopted analytical truth as a separate category that could be used to justify the verification procedure. You have implicitly done that yourself, where you argued that it was true by definition.

If your claim is now that claims about the external world are meaningless for some reason other than the verification principle... well, what is it?

Replies from: ike
comment by ike · 2020-08-15T15:11:45.848Z · LW(p) · GW(p)

What verification means is that some statements are meaningless because they do not constrain expected experience

Which statements? My claim is that beliefs are meaningless insofar as they don't constrain expectations, with a carve out for mathematical claims. Alternatively, we can restrict the set to only beliefs about external reality that are declared meaningless. Either way, verificationism isn't considered meaningless.

If there's a version of the verificationism principle that was historically shown to be self defeating, I don't see how it's relevant to mine. I looked up the various critiques, and they're addressing claims I never made.

Replies from: Chris_Leong
comment by Chris_Leong · 2020-09-17T09:06:53.369Z · LW(p) · GW(p)

Excluding the verification principle from itself feels like a dodge to me. I don't think labelling it as not a belief gets you anywhere.

Replies from: ike
comment by ike · 2020-09-17T15:51:54.382Z · LW(p) · GW(p)

My version of the verification principle is explicitly about ontological claims. And that principle is not itself an ontological claim.

I don't really like the way the verification principle is phrased. I'm referencing it because I don't want to coin a new name for what is definitely a verificationist worldview with minor caveats.

Replies from: Chris_Leong, TAG
comment by Chris_Leong · 2020-09-18T00:22:41.860Z · LW(p) · GW(p)

Ok, so you're only applying the verification principle to ontological claims. That is quite a bit crisper than what you wrote before. On the other hand, the verification principle does feel like an ontological claim as it is claiming that certain things don't exist or at least that talking about them is meaningless. But how are you defining ontological?

Replies from: ike
comment by ike · 2020-09-18T00:32:29.725Z · LW(p) · GW(p)

I do have the disclaimer in the middle of OP, but not upfront, to be fair.

>the verification principle does feel like an ontological claim as it is claiming that certain things don't exist or at least that talking about them is meaningless

These are very different things.

>how are you defining ontological

Claims of the sort "X exists" or synonyms like "X is real", when intended in a deeper sense than more colloquial usage (e.g. "my love for you is real" is not asserting an ontological claim, it's just expressing an emotion; "the stuff you see in the movies isn't real" is also not an ontological usage; "the tree in the forest exists even when nobody's looking at it" is an ontological claim, as is "the past really happened".)

(Note that I view "the tree in the forest exists when people are looking at it" as just as meaningless - all there is is the experience of viewing a tree. Our models contain trees, but the existence claim is about the territory, not the model.)

comment by TAG · 2020-09-17T17:15:42.163Z · LW(p) · GW(p)

"Beliefs are meaningless unless they constrain expectations" and "beliefs are meaningless if they are about ontology" don't mean the same thing. The verificationist principle isn't about ontology, on the one hand, but still doesn't constrain expectations, on the other.

Replies from: ike
comment by ike · 2020-09-17T18:50:52.969Z · LW(p) · GW(p)

I only apply my principle to ontological statements, as explained in OP. And ontological statements never constrain expectations. So they are equivalent under these conditions.

Replies from: TAG
comment by TAG · 2020-09-17T20:11:57.171Z · LW(p) · GW(p)

But you shouldn't apply your principle only to ontological statements. If the problem with ontological statements is that they don't constrain expectations, it's unreasonable to exempt other statements that don't constrain expectations.

Replies from: ike, ike
comment by ike · 2020-09-17T22:01:10.041Z · LW(p) · GW(p)

My problem with ontological statements is they don't appear to be meaningful.

Don't confuse the historical verification principle with the reasons for believing it. Those reasons apply to ontological statements and not to other statements.

Replies from: TAG
comment by TAG · 2020-09-18T10:07:56.317Z · LW(p) · GW(p)

My problem with ontological statements is they don’t appear to be meaningful.

You certainly started by making a direct appeal to your own intuition. Such an argument can be refuted by intuiting differently.

Those reasons apply to ontological statements and not to other statements.

You don't have any systematic argument to that effect. Other verificationists might, but you don't.

There's a tradition of justifying the verification principle as an analytical truth, for instance. Your reinvention of verificationism is worse than the original.

Replies from: ike
comment by ike · 2020-09-18T14:58:58.142Z · LW(p) · GW(p)

You certainly started by making a direct appeal to your own intuition. Such an argument can be refuted by intuiting differently.

I've made a number of different arguments. You can respond by taking ontological terms as primitive, but as I've argued there's strong reasons for rejecting that.

You don't have any systematic argument to that effect

Of course I do. Every one of the arguments I've put forward clearly applies only to the kinds of ontological statements I'm talking about. If an argument I believed was broader, then I'd believe a broader class of statements was meaningless. If you disagree, which specific argument of mine (not conclusion) doesn't?

I'm not interested in analytical definitions right now. That's how Quine argued against it and I don't care about that construction.

Replies from: TAG
comment by TAG · 2020-09-18T17:41:50.926Z · LW(p) · GW(p)

You can respond by taking ontological terms as primitive,

That's not what I said. I said that you made a claim based on nothing but intuition, and that a contrary claim based on nothing but intuition is neither better nor worse than it.

Every one of the arguments I’ve put forward clearly applies only to the kinds of ontological statements

The argument that if it has no observable consequences, it is meaningless does not apply to only ontological statements.

Replies from: ike
comment by ike · 2020-09-18T19:33:18.177Z · LW(p) · GW(p)

> I said that you made a claim based on nothing but intuition

This isn't true - I've made numerous arguments for this claim not purely based on intuition.

>The argument that if it has no observable consequences, it is meaningless does not apply to only ontological statements.

I did not make this argument. This is a conclusion that's argued for, not an argument, and the arguments for this conclusion only apply to ontological statements.

Replies from: TAG
comment by TAG · 2020-09-21T17:46:07.108Z · LW(p) · GW(p)

This isn’t true—I’ve made numerous arguments for this claim not purely based on intuition.

I didn't say that the only argument you made was based on intuition. I said that you made an argument based on intuition, i.e. one of your arguments was.

the arguments for this conclusion only apply to ontological statements.

Why? Because your intuition doesn't tell you that an undecidable statement is meaningless unless it is ontological?

Well, maybe it doesn't - after all, anyone can intuit anything. That's the problem with intuition.

The early verificationists had a different problem: they argued for the meaninglessness of metaphysical statements systematically, but ran into trouble when the verificationist principle turned out to be meaningless in its own terms.

Replies from: ike
comment by ike · 2020-09-21T19:57:33.840Z · LW(p) · GW(p)

Why? Because your intuition doesn't tell you that an undecidable statement is meaningless unless it is ontological?

No, because the specific arguments only work for ontological statements. E.g. the multiverse argument only works for the subset of ontological claims that are true in only some worlds.

Replies from: TAG
comment by TAG · 2020-09-22T08:24:53.279Z · LW(p) · GW(p)

The multiverse argument is

  1. Ontological propositions are unverifiable

  2. Unverifiable propositions are meaningless.

2 would apply to any unverifiable statement.

Replies from: ike
comment by ike · 2020-09-22T15:55:00.910Z · LW(p) · GW(p)

No, I never took 1 or 2 as a premise. Read it again.

comment by ike · 2020-09-17T22:00:28.960Z · LW(p) · GW(p)

My problem with ontological statements is they don't appear to be meaningful.

Don't confuse the historical verification principle with the reasons for believing it. Those reasons apply to ontological statements and not to other statements.

Replies from: TAG
comment by TAG · 2020-09-22T08:36:00.597Z · LW(p) · GW(p)

"appear to be"

Replies from: ike
comment by ike · 2020-09-22T15:56:03.270Z · LW(p) · GW(p)

Yes, in many ways, with extended arguments. What exactly is your issue?

Replies from: TAG
comment by TAG · 2020-09-23T10:19:25.194Z · LW(p) · GW(p)

Appeal to personal intuition.

Replies from: habryka4, ike
comment by habryka (habryka4) · 2020-09-23T18:04:46.816Z · LW(p) · GW(p)

Yeah, I don't know. Don't take this as a moderator warning (yet), but when discussions reach the "one-sentence accusation of fallacy" stage it's usually best to disengage. I haven't had time to read this whole thread to figure out exactly what happened, but I don't want either of you to waste a ton of time in unproductive discussion.

comment by ike · 2020-09-23T15:27:26.623Z · LW(p) · GW(p)

I don't believe I've done that. 

comment by Jay Molstad (jay-molstad) · 2020-08-16T23:22:26.175Z · LW(p) · GW(p)

I'm not quite sure what you're going for with the distinction between an "account of meaning" and a "belief". It seems likely to cause problems elsewhere; language conveys meanings through socially-constructed, locally-verifiable means. A toddler learns from empirical experience what word to use to refer to a cat, but the word might be "kitty" or "gato" or "neko" depending on where the kid lives.

In practice, I suspect it more or less works out like my "inductive rule of thumb".

comment by Gordon Seidoh Worley (gworley) · 2020-08-13T18:25:15.121Z · LW(p) · GW(p)

General assessment: valid critiques but then you go and make your own metaphysical claims in exactly the opposite direction, missing the point of your own analysis.

Replies from: ike
comment by ike · 2020-08-13T18:41:28.906Z · LW(p) · GW(p)

I'm not making any metaphysical claims - I'm asserting such claims are meaningless. Can you elaborate?

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2020-08-13T18:46:19.552Z · LW(p) · GW(p)

Claiming that they are meaningless is also making a claim that there is no there there to make claims about, and implies a metaphysics where there is a causal disconnect between perception and the perceived.

Replies from: ike
comment by ike · 2020-08-13T18:56:50.445Z · LW(p) · GW(p)

I don't see why claiming that the concept of "there there" is meaningless implies that "there is no there there"? I disagree with the latter statement precisely because I think it's meaningless.

>causal disconnect between perception and the perceived.

I'm not sure what claim you're referencing here, or why it follows from mine.

comment by Al Truist (al-truist) · 2020-08-13T01:09:49.429Z · LW(p) · GW(p)

Huh, upon reflection I can't figure out a good way to define reality without referring to subjective experience. I might not go so far as to say it's not a coherent concept, but you raise some interesting points.

Replies from: Signer
comment by Signer · 2020-08-14T19:42:19.165Z · LW(p) · GW(p)

That's because "subjective experience" and "reality" are the same thing - panpsychism solves the Hard Problem and provides some intuitions for what "reality" means.

comment by Chris_Leong · 2020-09-17T09:14:30.787Z · LW(p) · GW(p)

Thanks for writing up an excellent Reductio Ad Absurdum of verificationism. As they say, "One man's modus ponens is another man's modus tollens".

I strongly agree with the claim that it is self-defeating. Here's another weird effect - let's suppose I roll a die and see that it is a 6. I then erase the information from my brain, which takes us to a position where the statement is impossible to verify. Does the statement then become meaningless?

Beyond this, I would say that "exists" is a primitive. If it makes sense to take anything as a primitive, then it makes sense to take "exists" as a primitive. And the thing with primitives is that you can't really define them in any satisfactory sense. Instead, you can only talk around them.

Replies from: ike
comment by ike · 2020-09-17T15:48:59.488Z · LW(p) · GW(p)

It's only self-defeating if you aren't careful about definitions. Which admittedly, I haven't been here. I'm writing a blog post exploring a topic, not an academic paper. I'd be glad to expand a bit more if you would point to a specific self-defeater.

>Here's another weird effect - let's suppose I roll a dice and see that it is a 6. I then erase the information from my brain, which then takes us to a position where the statement is impossible to verify. Does the statement then become meaningless?

Yes, it's irrelevant to any future predictions.

Re exist being primitive: it's a weird primitive that

1. Is entirely useless for any kind of prediction

2. Is entirely impossible to obtain evidence about even in theory

3. Appears to break down given plausible assumptions about the mere possibility of a multiverse

If anything, I think "exist is a primitive" is self-defeating, given 3. And it's useless for purposes people tend to use it for, given 1 and 2.

Do you think there's a "fact of the matter" as to which branch of the level III multiverse we're in? What about levels I, II, and IV?

Replies from: Chris_Leong
comment by Chris_Leong · 2020-09-18T00:28:51.265Z · LW(p) · GW(p)

I didn't mention a specific self-defeater as that's been discussed in the comments above.

  1. Denying the existence of a deeper, unobservable reality or saying that speaking about it is nonsense is also useless for any kind of prediction
  2. The Universe Doesn't Have to Play Nice [LW · GW] captures my objections to this kind of reasoning
  3. Just because it is convenient to use exists in a way that refers to a particular scope of a multiverse doesn't prevent us from treating the whole multiverse as just a rather unusual universe and using the term exists normally. But aren't claims about a multiverse inconsistent with your strong verificationism?
Replies from: ike
comment by ike · 2020-09-18T02:02:08.286Z · LW(p) · GW(p)

>I didn't mention a specific self-defeater as that's been discussed in the comments above.

I've responded to each comment. Which argument do you think has not been sufficiently responded to?

>1. Denying the existence of a deeper, unobservable reality or saying that speaking about it is nonsense is also useless for any kind of prediction

You're the one saying we should treat this notion as primitive. I'm not arguing for taking verificationism as an axiom, but as a considered and argued for conclusion. You obviously need far stronger arguments for your axioms/primitives than your argued conclusions.

>2. The Universe Doesn't Have to Play Nice

It seems like there's a great deal of agreement in that post. You concede that there's no way to obtain evidence against Boltzmann or any knowledge about realism, agreeing with my 1 and 2 above (which are controversial in philosophy.) I don't see what part of that post has an objection to the kind of reasoning here. I'm not saying the universe must play nice, I'm saying it's odd to assert a primitive under these conditions.

>Just because it is convenient to use exists in a way that refers to a particular scope of a multiverse doesn't prevent us from treating the whole multiverse as just a rather unusual universe and using the term exists normally.

This would be consistent with my argument here, which is about claims that are true in some parts of the multiverse and false in others. If you retreat to the viewpoint that only statements about the multiverse can use the term exist, then you should still agree that statements like "chairs exist in our world" are meaningless.

>aren't claims about a multiverse inconsistent with your strong verificationism? 

I think I live in a level IV multiverse, and the sense I mean this in is that my probability expectations are drawn from that multiverse conditioned on my current experience. It's entirely a statement about my expectations. (I also have a small probability mass on there being branches outside level IV with uncomputable universes, such as ones containing halting oracles.) I think this is meaningful but "the level IV multiverse actually exists" is not.

comment by Teerth Aloke · 2020-08-15T07:53:34.574Z · LW(p) · GW(p)

My concept of a meaningless claim is a claim that can be substituted for any alternative without any change to anticipated experience. For example, the claim 'Photon does not exist after reaching the Event Horizon' can be substituted for the claim 'Photon exists after crossing the Event Horizon' without bringing any change to anticipated experience. Thus, it is not rational to believe in any of the alternatives. What is your practical definition of 'meaningless'?

Replies from: ike
comment by ike · 2020-08-15T15:15:33.461Z · LW(p) · GW(p)

This is fine, but notice that "photon never existed at all" is also perfectly consistent and doesn't change your anticipation, as long as "I will have experiences as predicted by this map that contains a photon" is assumed.

In general, "map X predicts my experiences" and "map X predicts my experiences and in addition map X is true/X exists in reality/etc" have exactly the same predictions, and my claim is that the former is simpler and that the latter is incoherent. No need to go past the event horizon.

Replies from: Teerth Aloke
comment by Teerth Aloke · 2020-08-16T03:44:19.742Z · LW(p) · GW(p)

But how can I believe that the photon-containing map predicts my experiences without implicitly believing in the existence of photons? Why should I believe in the success of the Photon-Containing map without believing in the existence of photons?

Replies from: ike
comment by ike · 2020-08-16T04:03:00.809Z · LW(p) · GW(p)

All models are wrong, some models are useful.

The soft sciences use models for prediction all the time without believing that the model reflects reality.


>Why should I believe in the success of the Photon-Containing map without believing in the existence of photons?

Because doing that in the past has worked out well and led to successful predictions. You don't actually need to assume realism.

Replies from: jay-molstad
comment by Jay Molstad (jay-molstad) · 2020-08-16T23:29:13.244Z · LW(p) · GW(p)

More to the point, the models that contain photons that behave "realistically" sometimes lead to unsuccessful predictions (e.g. the double-slit experiment), and models that consistently give successful predictions include photon behavior that seems "unreal" to human intuition (but corresponds to experimentally-observed reality).

comment by ChristianKl · 2020-08-13T12:35:08.295Z · LW(p) · GW(p)

It seems like you are both arguing that "reality exists" is false and that it's meaningless. It's worth keeping those apart.

When it comes to the meaningfulness of terms, EY wrote more about truth than about reality. He defends the usefulness of the term truth by asking:

If we were dealing with an Artificial Intelligence that never had to argue politics with anyone, would it ever need a word or a concept for 'truth'? 

He finds that he has good reasons to answer yes. In a similar vein, it might be useful to give an AI the notion that there's a reality outside of itself, distinct from anything the AI knows and not directly accessible to it. The existence of reality doesn't need to be verifiable by the AI for it to be a good idea for the AI's creator, who knows something about reality, to teach the AI about it.

Replies from: ike
comment by ike · 2020-08-13T13:46:45.925Z · LW(p) · GW(p)

Yes, I noted throughout the post that those are distinct. In the EY posts I'm responding to, he argues against both of those views separately. I am not arguing that the claim is false, only that it's meaningless. Not sure which part you think is arguing that it's false; I can edit to clarify?

Re truth, I went back and reread the post you're quoting from (https://www.lesswrong.com/s/SqFbMbtxGybdS2gRs/p/XqvnWFtRD2keJdwjX [? · GW])

And so saying 'I believe the sky is blue, and that's true!' typically conveys the same information as 'I believe the sky is blue' or just saying 'The sky is blue' - namely, that your mental model of the world contains a blue sky.

I agree with this. My own argument is that saying "my map contains a spaceship, and that spaceship actually exists" conveys the exact same information as "my map contains a spaceship", and the second half of the statement is meaningless.

He argues that truth is a useful concept, which is not the same as arguing that it's meaningful. His example AI could just as well have a concept of the expected and observed errors of its various maps without having a concept of an external reality. I think all of the ideas he claims truth is required to express can be reformulated in a way that doesn't entail reality existing and remain just as useful. Solomonoff induction works without having a notion of "some maps are real."

Other than that, he repeats arguments in The Simple Truth and the other posts that I already responded to. You don't need to talk about an external reality in order to meaningfully talk about errors in your maps. You can just talk about the maps.

Replies from: ChristianKl
comment by ChristianKl · 2020-08-13T16:36:21.092Z · LW(p) · GW(p)

He argues that truth is a useful concept, which is not the same as arguing that it's meaningful.

The term meaningful doesn't appear in the post about beliefs paying rent. It's unclear to me why it should be more important than whether a concept is useful for the base level.

His example AI could just as well have a concept of the expected and observed errors of its various maps without having a concept of an external reality.

Actors that try to minimize observed errors instead of aligning with external reality can easily minimize that error by introducing a bias in their observations, where the bias makes the observations differ from reality. It's useful to have a concept that such biases exist.

Replies from: ike
comment by ike · 2020-08-13T19:17:13.662Z · LW(p) · GW(p)

Sorry, missed this comment earlier.

>The term meaningful doesn't appear in the post about beliefs paying rent. It's unclear to me why it should be more important than whether a concept is useful for the base level.

If a concept is meaningless, but you continue to use it because it makes your mental processes simpler or is similarly useful, that's a state of affairs that should be distinguished from the concept being meaningful. I'm not comparing importance here.

>Actors that try to minimize observed errors instead of aligning with external reality can easily minimize that error by introducing a bias in their observations, where the bias makes the observations differ from reality. It's useful to have a concept that such biases exist.

I'm not sure what you mean by this bias? Can you give an example?

Replies from: ChristianKl, TAG
comment by ChristianKl · 2020-08-13T21:20:26.667Z · LW(p) · GW(p)

If it's 24 degrees C in reality and my thermometer observes that it's 22 degrees C then the thermometer has a bias.

Replies from: ike
comment by ike · 2020-08-13T22:00:34.248Z · LW(p) · GW(p)

It's ironic you picked that example, because temperature is explicitly socially constructed. There's a handful of different definitions, but they're all going to trace back to some particular set of observations.

Anyway, I'm interpreting your statement as meaning that some other set of thermometers will show 24C. I don't know what you mean by "it's 24 degrees C in reality", other than some set of predictions about what thermometers will show, or how fast water will boil/freeze, etc.

The bias can be conceptualized as the thermometer consistently showing a lower degree than other thermometers. Why is a concept of "reality" useful, above and beyond that?

Replies from: ChristianKl
comment by ChristianKl · 2020-08-14T20:17:12.102Z · LW(p) · GW(p)

It might be ironic if you abuse the terms map and territory in a way that just rehashes dualism instead of the way it was intended in Science and Sanity. There are more layers of abstraction here than just two. 

other than some set of predictions about what thermometers will show, or how fast water will boil/freeze, etc. 

So you think the tree that falls in the forest without someone to hear it doesn't meaningfully make a sound?

The bias can be conceptualized as the thermometer consistently showing a lower degree than other thermometers.

Then you have to spend a lot of time thinking about what other thermometers you are talking about. You do get into problems in cases where the majority of measurements of a given thing share a measurement bias. 

You are not going to reason well about a question like Are Americans Becoming More Depressed? [LW · GW] if you treat the subject as not being about an underlying reality. 

Replies from: ike
comment by ike · 2020-08-14T20:49:37.987Z · LW(p) · GW(p)

>So you think the tree that falls in the forest without someone to hear it doesn't meaningfully make a sound?

Worse, I don't think trees meaningfully fall in forests that nobody ever visits.

>You do get into problems in cases where the majority of measurements of a given thing share a measurement bias.

I don't know that that's meaningful. Measurement is a social construct. If every thermometer since they were first invented had a constant 1 degree bias, there wouldn't be a bias; our scale would just be different. It's as meaningless as shifting the entire universe one foot to the left. Who is to say that the majority is wrong and a minority is correct? And if there is some objective way to say that, then we can define the bias in terms of that objective way - for example, by defining it in relation to some particular thermometer that's declared to be perfect (not unlike how some measurements were actually defined for some time).
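
To make that concrete, here's a minimal sketch with made-up readings (the thermometer names, numbers, and the one-degree offset are purely illustrative): bias is perfectly well defined relative to other thermometers, and a constant shift applied to every thermometer changes nothing observable.

```python
# Made-up readings from four thermometers measuring the same thing.
readings = {"A": 22.0, "B": 24.0, "C": 24.1, "D": 23.9}

def relative_bias(readings, name):
    # Bias of one thermometer defined purely relative to the others,
    # with no appeal to an underlying "real" temperature.
    others = [v for k, v in readings.items() if k != name]
    return readings[name] - sum(others) / len(others)

print(relative_bias(readings, "A"))  # A reads about 2 degrees low relative to the rest

# Shifting *every* thermometer by the same constant leaves all relative
# comparisons (and hence all predictions) unchanged; it's just a relabeled scale.
shifted = {k: v + 1.0 for k, v in readings.items()}
print(relative_bias(shifted, "A"))   # same relative bias as before
```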

>You are not going to reason well about a question like Are Americans Becoming More Depressed? [LW · GW] if you treat the subject as not being about an underlying reality. 

I mean, surely you see how questions like that might not be terribly meaningful until you operationalize them somehow? And as I've said, my theory does not differ in predictive ability, so if I'm reasoning worse in some respect but I get all the same predictions, what's wrong?

comment by TAG · 2020-08-14T10:24:41.587Z · LW(p) · GW(p)

If a concept is meaningless, but you continue to use it because it makes your mental processes simpler or is similarly useful, that’s a state of affairs that should be distinguished from the concept being meaningful

Or is useful for communicating ideas to other people... but, hang on, how can a meaningless concept be useful for communication? That just breaks the ordinary meaning of "meaning".

Replies from: ike
comment by ike · 2020-08-14T15:49:09.846Z · LW(p) · GW(p)

EY argues that a particular literary claim can be meaningless, and yet you can still have a test and get graded based on your knowledge of that statement. This is in the posts that I linked at the very top of my post. Do you disagree with those claims, i.e. do you think those claims are actually meaningful?

Replies from: TAG, ChristianKl
comment by TAG · 2020-08-16T10:28:17.048Z · LW(p) · GW(p)

That sort of thing is testable. You get a bunch of literary professors in a Septuagint scenario, where they have to independently classify books as pre- or post-colonial. Why would that be impossible? It's evidently possible to perform an easier version of the test where books are classified as romance, horror, or Western.

That would be evidence. EY's personal opinion is not evidence, nor is yours.

Replies from: ike
comment by ike · 2020-08-16T14:52:17.657Z · LW(p) · GW(p)

Certainly if you get a bunch of physicists in a room they will disagree about what entities are real. So according to your proposed test, physics isn't real?

Replies from: TAG
comment by TAG · 2020-08-17T11:10:21.586Z · LW(p) · GW(p)

As I have explained, the argument for realism in general is not based on a particular theory being realistically true.

Replies from: ike
comment by ike · 2020-08-17T15:46:49.148Z · LW(p) · GW(p)

You ignored my question.

comment by ChristianKl · 2020-08-16T11:42:27.107Z · LW(p) · GW(p)

There's a huge difference between saying that a particular literary claim can be meaningless and saying that all of those claims are meaningless.

You can take the Sokal episode as an argument that if someone who doesn't have the expert knowledge can easily pass as an expert then those experts don't seem to have a lot of meaningful knowledge. 

Different claims in that tradition are going to have a different status. 

Replies from: ike
comment by ike · 2020-08-16T14:51:29.978Z · LW(p) · GW(p)

I'm not saying that all literary claims are meaningless. I'm saying all ontological claims are meaningless. Regardless, I'm responding to a comment which implied meaningless concepts cannot be useful for communication.

comment by Benjy Forstadt (benjy-forstadt-1) · 2020-10-17T02:54:44.990Z · LW(p) · GW(p)

A couple thoughts:

I think of explanations as being prior to predictions. The goal of (epistemic) rationality, for me, is not to accurately predict what future experiences I will have. It’s to come up with the best model of reality that includes the experiences I’m having right now.

I’ve even lately come to be skeptical of the notion of anticipated experience. In Many Worlds, there is no such thing as “what I will experience”, there are just future people descended from me who experience different things. There are substitute notions that play the role of beliefs about future experiences, but they aren’t the same thing.

Experiences are physical processes in the world. If you have beliefs about your future experiences, then you must either have beliefs about physical processes, or you have to be a dualist. If you don’t have beliefs about experiences, but instead just “anticipate” them, then idk.

There is an odd question that I think about sometimes - why does “exist” refer to actual real world existence, as opposed to something like “exists according to Max Tegmark”, or “exists inside the observable universe”? I’m taking for granted that there is a property of existence - the question is, how do we manage to pick it out? There are a couple ideas off the top of my head that could answer this question. Reference magnetism is the idea that some properties are so special that they magically cause our words to be about them. Existence is pretty special - it’s a property that everything has. Alternatively, maybe “exist” acquires its meaning somehow through our interaction with the physical world - existent things are those things that I can causally interact with. This option has the “benefit” of ruling out the possibility of causally isolated parallel worlds.

Replies from: ike
comment by ike · 2020-10-17T03:25:44.517Z · LW(p) · GW(p)

In Many Worlds, there is no such thing as “what I will experience”, there are just future people descended from me who experience different things.

Anticipated experience is just my estimate for the percentage of future-mes with said experience. Whether any of those future-mes "actually exist" is meaningless, though; it's all just models. 

It’s to come up with the best model of reality that includes the experiences I’m having right now.

Why? You'll end up with many models which fit the data, some of which are simpler, but why is any one of those the "best"? 

Experiences are physical processes in the world.

Disagree that this statement is cognitively meaningful. 

Replies from: benjy-forstadt-1
comment by Benjy Forstadt (benjy-forstadt-1) · 2020-10-17T06:00:31.406Z · LW(p) · GW(p)

Anticipated experience is just my estimate for the percentage of future-mes with said experience. Whether any of those future-mes "actually exist" is meaningless, though, it's all just models.

So the idea is that you’re taking a percentage of the yous that exist across all possible models consistent with the data? Why? And how? I can sort of understand the idea that claims about the external world are meaningless in so far as they don’t constrain expectations. But now this thing we’ve been calling expectations is being identified with a structure inside the models whose whole purpose is to constrain our expectations. It seems circular to me.

It didn’t have to be this way, that the best way to predict experiences was by constructing models of an external world. There are other algorithms that could have turned out to be useful for this. Some people even think that it didn’t turn out this way, and that quantum mechanics is a good example.

Why? You'll end up with many models which fit the data, some of which are simpler, but why is any one of those the "best"?

I care about finding the truth. I think I have experiences, and my job is to use this fact to find more truths. The easiest hypotheses for me to think of that incorporate my data make ontological claims. My priors tell me that something like simplicity is a virtue, and if we’re talking about ontological claims, that means simpler ontological claims are more likely. I manage to build up a really large and intricate system of ontological claims that I hold to be true. At the margins, some ontological distinctions feel odd to maintain, but by and large it feels pretty natural.

Now, suppose I came to realize that ontological claims were in fact meaningless. Then I wouldn't give up my goal of finding truths; I would just look elsewhere, maybe at logical, or mathematical, or maybe even moral truths. These truths don't seem adequate to explain my data, but maybe I'm wrong. They are also just as suspicious to me as ontological truths. I might also look for new kinds of truths. I think it's definitely worth it to try and look at the world (sorry) non-ontologically.

Replies from: ike
comment by ike · 2020-10-17T11:49:38.573Z · LW(p) · GW(p)

The how is Solomonoff induction, the why is because it's historically been useful for prediction.

I don't believe programs used in Solomonoff are "models of an external world"; they're just models.

Re simplicity, you're conflating a mathematical treatment of simplicity, which justifies the prior and under which ontological claims aren't simple, with a folk understanding of simplicity under which they are. Or at least you're promoting the folk understanding over the mathematical understanding.

If you understand how Solomonoff works, are you willing to defend the folk understanding over that?
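
To spell out what I mean by the mathematical treatment, here's a toy sketch of the Solomonoff-style weighting (the hypothesis names, program lengths, and predictions below are made-up stand-ins for the actual space of programs): each program consistent with the data gets prior weight 2^-length, and the prediction is just the weighted mixture. Nowhere does any program need to "really exist".

```python
# Toy Solomonoff-style mixture: hypothetical programs consistent with the data so far,
# each weighted by 2^-(program length in bits). Shorter programs dominate the prediction.
hypotheses = {
    # name: (length_in_bits, predicted next observation)
    "short_program": (10, "0"),
    "longer_program": (25, "1"),
    "very_long_program": (60, "0"),
}

def mixture_prediction(hypotheses):
    weights = {name: 2.0 ** -length for name, (length, _) in hypotheses.items()}
    total = sum(weights.values())
    probs = {}
    for name, (_, prediction) in hypotheses.items():
        probs[prediction] = probs.get(prediction, 0.0) + weights[name] / total
    return probs

print(mixture_prediction(hypotheses))
# Roughly {'0': 0.99997, '1': 0.00003}: the short program dominates. The weights are
# statements about predictions, not about which program "really exists".
```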

comment by Mati_Roy (MathieuRoy) · 2020-09-01T21:25:21.840Z · LW(p) · GW(p)

you're an anti-realist about reality :o :P

comment by Mati_Roy (MathieuRoy) · 2020-09-01T21:24:17.342Z · LW(p) · GW(p)

side note (not addressing the main point)

it's not like there's a great altruism argument if only we conceded that verificationism is wrong

altruism is a value, not a belief. do you think there's no great argument for why it's possible in principle to configure a mind to be altruistic?

Replies from: ike
comment by ike · 2020-09-01T22:53:17.937Z · LW(p) · GW(p)

Altruism is a preference. On my view, that preference is just incoherent, because it refers to entities that are meaningless. But even without that, there's no great argument for why anyone should be altruistic, or for any moral claims.

I don't think it's possible in principle to configure a mind to pursue incoherent goals. If it was accepted to be coherent, then it would be possible.

Replies from: MathieuRoy
comment by Mati_Roy (MathieuRoy) · 2020-09-02T02:41:06.097Z · LW(p) · GW(p)

I grant that altruism is (seems) incoherent if the existence of other minds is incoherent. But if 'Strong Verificationism' is wrong, and Eliezer is right, then it seems obviously possible to create a mind that cares about other minds, no?

there's no great argument for why anyone should be altruistic, or for any moral claims.

there are great arguments for why it's possible to design an altruistic mind. a mind with altruistic values will generally be more likely to achieve the altruistic values if ze has / keeps them, and vice versa. do you disagree with that?

Replies from: ike
comment by ike · 2020-09-02T04:04:36.190Z · LW(p) · GW(p)

I'm not sure. I think even if the strong claim here is wrong and realism is coherent, it's still fundamentally unknowable, and we can't get any evidence at all in favor. That might be enough to doom altruism.

It's hard for me to reason well about a concept I believe to be incoherent, though.

Replies from: MathieuRoy
comment by Mati_Roy (MathieuRoy) · 2020-09-02T05:34:18.654Z · LW(p) · GW(p)

AFAIU, under strong(er?) verificationism, it's also incoherent to say that your past and future selves exist. so all goals are doomed, not just altruistic ones.

alternatively, maybe if you merge all the minds, then you can verify other minds exist and take care of them. plus, maybe different part of your brain communicating isn't qualitatively different from different brains communicating with each other (although it probably is).

Replies from: ike
comment by ike · 2020-09-02T13:15:32.867Z · LW(p) · GW(p)

I haven't written specifically about goals, but given that claims about future experiences are coherent, preferences over the distribution of such are coherent as well, and one can act on beliefs about how one's actions affect that distribution. This doesn't require the past to exist.

comment by Sunny from QAD (Evan Rysdam) · 2020-08-13T23:58:58.250Z · LW(p) · GW(p)

Yeah. This post could also serve, more or less verbatim, as a write-up of my own current thoughts on the matter. In particular, this section really nails it:

As above, my claim is not that the photon disappears. That would indeed be a silly idea. My claim is that the very claim that a photon "exists" is meaningless. We have a map that makes predictions. The map contains a photon, and it contains that photon even outside any areas relevant to predictions, but why should I care? The map is for making predictions, not for ontology.

[...]

I don't suppose that. I suppose that the concept of a photon actually existing is meaningless and irrelevant to the model.

[...]

This latter belief is an "additional fact". It's more complicated than "these equations describe my expectations".

And the two issues you mention — the spaceship that's leaving Earth to establish a colony that won't causally interact with us, and the question of whether other people have internal experiences — are the only two notes of dissonance in my own understanding.

(Actually, I do disagree with "altruism is hard to ground regardless". For me, it's very easy to ground. Supposing that the question "Do other people have internal conscious experiences?" is meaningful and that the answer is "yes", I just very simply would prefer those people to have pleasant experiences rather than unpleasant ones. Then again, you may mean that it's hard to convince other people to be altruistic, if that isn't their inclination. In that case, I agree.)

comment by TAG · 2020-08-13T17:31:44.688Z · LW(p) · GW(p)

So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.”

Here, reality is merely a convenient term to use, which helps conceptualize errors in the map.

No, the point of the argument for realism in general is that it explains how prediction, in general, is possible.

That's different from saying that the predictive ability of a specific theory is good evidence for the ontological accuracy of a specific theory.

Replies from: ike
comment by ike · 2020-08-13T17:57:40.572Z · LW(p) · GW(p)

What's the point of having an explanation for why prediction works? I dispute that it's actually an explanation - incoherent claims can't serve as an explanation.

Replies from: ChristianKl, TAG
comment by ChristianKl · 2020-08-16T11:44:33.598Z · LW(p) · GW(p)

All the posts on the usefulness of gears-models are about why having explanations is good. It's generally helpful for engaging with the subject matter.

Replies from: ike
comment by ike · 2020-08-16T14:48:16.943Z · LW(p) · GW(p)

Not when the explanation is incoherent and adds nothing to predictive ability or ease of calculation. It's just an unnecessary assumption.

comment by TAG · 2020-08-14T10:27:39.558Z · LW(p) · GW(p)

What's the point of having an explanation of anything? We do things like science because we value explanations. It's a bit much to say that we should give up on science, and it's a bit suspicious that the one thing that doesn't need explaining is the one thing your theory can't explain.

Replies from: ike
comment by ike · 2020-08-14T15:51:46.446Z · LW(p) · GW(p)

Prediction, in general, is possible because Occam's razor works. As I said in a different comment, realism doesn't help explain Occam, and I am satisfied with EY's argument for grounding Occam which doesn't appear to require realism.

Replies from: TAG
comment by TAG · 2020-08-15T19:10:33.939Z · LW(p) · GW(p)

Prediction isn't based on Occam's razor alone: you need the universe, reality, the place where your data are coming from, to play nice by having compressible patterns. Which means that you need external reality.

Replies from: ike
comment by ike · 2020-08-15T19:36:59.434Z · LW(p) · GW(p)

All you need is that Occam works well for predicting your observations. Doesn't require realism, that's an added assumption.

Replies from: TAG
comment by TAG · 2020-08-15T19:38:52.651Z · LW(p) · GW(p)

Reality is where observations come from.

Replies from: ike
comment by ike · 2020-08-15T20:05:53.333Z · LW(p) · GW(p)

Yes, that's the added assumption. It's neither required nor coherent.

comment by TAG · 2020-08-13T15:30:50.114Z · LW(p) · GW(p)

Is the thing you are defending verificationism, or anti-realism?

Here you say you argue for verificationism

This post consists of two parts—first, the positive case for verificationism,

...here you argue *from* verificationism:

One consequence of verificationism

If you are arguing for verificationism, you need to argue against the main alternative theory -- that meaning is an essentially linguistic issue.

1. It’s impossible to reach that world from ours, it’s entirely causally disconnected, and

2. That world “really exists”

That’s exactly what decoherent many worlds asserts!

Both sides of the many worlds debate appear to understand what "causally isolated worlds" means, the evidence for which is that they can actually communicate their disagreements about it.

If verificationism means that "exists" is a meaningless word, then "this territory does not exist" is a meaningless claim not a true claim.

For one, it’s not clear what it ["exists"] means

Anti-realism needs key terms like "map", "observer", "experience" and "predict". Giving clear definitions is a problem everyone faces.

And it's hard to see how it can get by without an ontology. Not only does "observer" need to be defined, observers need to exist. Strong anti-realism seems to be incoherent.

Another way to look at this is through the map-territory distinction. “X really exists” is a map claim (i.e. a claim made by the map), not a territory claim, but it’s about the territory. It’s a category error and meaningless.

What? What's a "map claim"?

Replies from: ike, ike
comment by ike · 2020-08-13T16:29:39.582Z · LW(p) · GW(p)

>Is the thing you are defending verificationism, or anti-realism?

The former. My position isn't so much anti-realism as it is that realism and anti-realism are both incoherent.

>..here you argue "from* verificationism:-

It's a bit tricky to make a positive case for a negative claim. I'm denying that some class of statements is meaningful. I do that by trying to draw a coherent picture of how one can do all the things we want to do without referring to these purportedly meaningless statements, and by pointing out that nobody has yet given or pointed at a reasonable account of meaning for those statements. I cite EY's account of meaning and show that it's not required.

>Both sides of the many worlds debate appear to understand what "causally isolated worlds" means, the evidence for which is that they can actually communicate their disagreements about it.

The fact that people can debate a topic does not imply that topic is meaningful. EY gave an example of that re literary claims, I just think it applies far more broadly.

>If verificationism means that "exists" is a meaningless word, then "this territory does not exist" is a meaningless claim not a true claim.

Correct. I just thought it was a snappy title. My claim is really that "this territory exists/does not exist" is incoherent. Maybe that's throwing people off?

>What? What's a "map claim"?

We have a map that says "X". If the map also says "X exists", that's not adding anything useful or meaningful to the map.

Replies from: TAG, TAG, TAG, TAG
comment by TAG · 2020-08-14T11:55:09.905Z · LW(p) · GW(p)

The fact that people can debate a topic does not imply that topic is meaningful

Yes it does, using the ordinary linguistic definition of meaning. There's no way of making the point without begging the question.

EY presents an example, post-colonial literature, that is meaningless according to his intuition. You present an example, causally non-interacting worlds, that is meaningless according to your intuition, but which he accepts!

Replies from: ike
comment by ike · 2020-08-15T04:18:51.233Z · LW(p) · GW(p)

>Yes it does, using the ordinary linguistic definition of meaning. There's no way of making the point without begging the question.

Well, if you explained to them what they were using language for, they would presumably disagree. Ask your friends whether, by talking about reality, they simply mean "an assumption that makes science simpler", or any other of the accounts you and others have offered in the comments, and I suspect most will say no. There's two possibilities: either their usage of reality/exist is meaningless, or they're mistaken about what it means. For my purposes, accepting either of these is a significant step towards my view.

>You present an example, causally non-interacting worlds

Remember, I think realism as applied to the world that does causally interact with us is also incoherent. My first example was just to warm things up.

Replies from: TAG
comment by TAG · 2020-08-15T09:49:17.684Z · LW(p) · GW(p)

Ask your friends whether, by talking about reality, they simply mean “an assumption that makes science simpler”,

Reality is an assumption that makes science comprehensible.

Replies from: ike
comment by ike · 2020-08-15T15:25:14.946Z · LW(p) · GW(p)

I don't think people would agree with this as an account of what they mean by reality. Calling it an assumption already conflicts with what most people intuitively "believe".

Replies from: TAG
comment by TAG · 2020-08-15T19:07:24.942Z · LW(p) · GW(p)

I thought your claim was that "reality" has no meaning.

Replies from: ike
comment by ike · 2020-08-15T19:35:46.589Z · LW(p) · GW(p)

It is. That doesn't conflict with what I said?

Replies from: TAG
comment by TAG · 2020-08-16T10:33:37.647Z · LW(p) · GW(p)

How can people have beliefs about reality if it has no meaning?

Replies from: ike
comment by ike · 2020-08-16T14:53:29.894Z · LW(p) · GW(p)

Their beliefs are meaningless. The fact that they think their beliefs are meaningful doesn't change that. People can be wrong.

Replies from: TAG
comment by TAG · 2020-08-17T10:44:51.157Z · LW(p) · GW(p)

Their beliefs are meaningless

You haven't proven that, because you haven't proven that your novel theory of meaning is the only correct one.

Replies from: ike
comment by ike · 2020-08-17T15:46:09.020Z · LW(p) · GW(p)

It's not novel, the core idea is at least a century old.

You're right, I haven't proven it. I've challenged everyone to provide an alternative account of definition that makes it meaningful, and over a hundred comments later nobody has done so.

Replies from: TAG, TAG
comment by TAG · 2020-08-17T16:43:37.588Z · LW(p) · GW(p)

It’s not novel, the core idea is at least a century old.

Which is very young compared to the total history of philosophy and fairly young compared to modern linguistics.

comment by TAG · 2020-08-17T16:41:09.071Z · LW(p) · GW(p)

You’re right, I haven’t proven it. I’ve challenged everyone to provide an alternative account of definition that makes it meaningful, and over a hundred comments later nobody has done so.

The objections you keep making are that alternative suggestions are meaningless given your definition of meaningful. In other words, that they are wrong because you are right. That alternative accounts are wrong because they are different to the one true account.

To make progress you need a way of engaging that isn't question-begging.

Replies from: ike
comment by ike · 2020-08-17T17:49:12.797Z · LW(p) · GW(p)

No, I've objected that the alternative definitions are circular, and assume the coherency as part of the definition. That is a valid critique even without assuming that it's incoherent from the outset.

comment by TAG · 2020-08-14T11:23:42.017Z · LW(p) · GW(p)

It’s a bit tricky to make a positive case for a negative claim

Verificationism is also a positive claim. It indicates that whenever a new instrument is invented - a dark matter detector or a ghost detector - a set of statements that were previously meaningless becomes meaningful, in a way that is completely unrelated to their apparent comprehensibility and usefulness in communication.

Replies from: ike
comment by ike · 2020-08-15T04:21:31.176Z · LW(p) · GW(p)

>It indicates that whenever a new instrument is invented - a dark matter detector or a ghost detector - a set of statements that were previously meaningless becomes meaningful.

Disagree - none of those are new ways of observation. We can only observe through the senses. All of those instruments only affect us through our senses, and don't change the meaning of any statements.

Now, if we managed to get a sixth sense, perhaps I'd have to modify the theory, probably to talk about input into the brain instead of observations, since the brain is all we really have direct access to. And if you start messing with brain inputs, that's enough of an edge case I don't expect any theory to survive intact.

Replies from: TAG
comment by TAG · 2020-08-15T09:35:38.100Z · LW(p) · GW(p)

So a statement is meaningful if it could be supported by sensory evidence enhanced by any kind of instrument invented in the future?

Replies from: ike
comment by ike · 2020-08-15T15:24:07.695Z · LW(p) · GW(p)

A statement is meaningful if it constrains expectations. If you say "we will be able to directly measure dark energy in the future", that's meaningful - I can imagine a set of sense data that would satisfy that (peer reviewed paper gets published, machine gets built, paper released claiming to have measured it, etc).

Replies from: TAG
comment by TAG · 2020-08-15T15:55:41.734Z · LW(p) · GW(p)

But whether a sentence constrains expectations depends on what instrumentation is possible. We don't know that, which means we can't say now that any given sentence is verificationally meaningful. Which means that verificationism is incoherent by your definition.

Replies from: ike
comment by ike · 2020-08-15T17:00:17.817Z · LW(p) · GW(p)

Disagree. Either a sentence makes predictions about observations or it doesn't.

>Which means that verificationism is incoherent by your definition.

How so? As explained multiple times, verificationism is an account of meaning / definitional, which itself doesn't need to be verifiable.

>we can't say now that any given sentence is verificationally meaningful.

I don't agree with this, but even if I concede it, it doesn't imply that verificationism is incoherent. Nothing about my account of meaning implies that it should be tractable to tell whether sentences are meaningful in all cases. Regardless, can you give an example of a sentence whose meaningfulness you think we can't determine on my definition? I suspect that if phrased correctly it will be either clearly meaningful or clearly meaningless under my criteria, and any ambiguity will be based on phrasing. As above, "we will be able to detect dark energy one day" is coherent. "Dark energy is responsible for X observation" is coherent if understood as incorporating the detection claim as well, but not otherwise. I assume you can see why "dark energy is responsible for X observation, but we'll never be able to detect it any better than we do now" is partially meaningless? It's equivalent to "dark energy is not responsible for X observation". The claim "dark energy is responsible", on its own, is not relevant to any predictions; the claim along with a claim about future detection abilities is.

Replies from: TAG
comment by TAG · 2020-08-15T18:57:18.566Z · LW(p) · GW(p)

Disagree. Either a sentence makes predictions about observations or it doesn’t

Very few sentences make explicit predictions. "Atoms exist" doesn't. So you either have to include implicit predictions or have a very small set of meaningful sentences. But the predictions a sentence implies depend on other claims and facts, which aren't necessarily available. So the ordinary theory of meaning is better at explaining how communication works than the verificationist one. In particular, scientists need to hold discussions about how to detect some hypothetical particle or force before a detector is built.

Regardless, can you give an example of a sentence that you think we can’t know if it’s meaningful on my definition

Pretty much anything that is subject to philosophical dispute. "God exists", because there are interactive and non-interactive (deistic) definitions of God. Mind-body dualism, because it also comes in interactive and non-interactive versions. Most novel physics.

Nothing about my account of meaning implies that it should be tractable to tell whether sentences are meaningful in all cases.

That is an implication of how you have defined "coherence".

Replies from: ike
comment by ike · 2020-08-15T19:34:38.111Z · LW(p) · GW(p)

>So the ordinary theory of meaning is better at explaining how communication works than the verificationist one.

Not sure what ordinary theory you're referring to.

> In particular, scientists need to hold discussions about how to detect some hypothetical particle or force before a detector is built.

Perfectly consistent with my view. To spell it out in my terms: someone proposes a revision to the Standard Map which includes new entities. It's initially unclear whether this revision produces different predictions (and therefore unclear if this revision is meaningful). Scientists discuss possible experiments that would come out differently based on this map, and if they figure one out, then they've determined that the revision is meaningful and can then test it.

I submit that the above *actually represents* how many physicists think about various formulations of QM that make the same predictions, and how they treat new theories that initially don't appear to make new predictions.

>"God exists" because there are interactive and non interactive (deistic) definitions of God. Mind body dualism , because it also come in Interactive and non interactive versions. Most novel physics.

"God exists" can be understood as making predictions or not depending on which variant. It's alternatively meaningful or meaningless depending on which is meant. Honestly, I think many would agree that certain forms of that claim are meaningless, e.g. pantheism.

>That is an implication of how have you defined "coherence".

Elaborate?

Replies from: TAG, TAG
comment by TAG · 2020-08-15T20:03:35.397Z · LW(p) · GW(p)

“God exists” can be understood as making predictions or not depending on which variant.

So it's a robust example of a proposition whose meaningfulness is undecidable, as required.

It’s alternatively meaningful or meaningless depending on which is meant.

Which is to say that the proposition per se is undecidable.

Replies from: ike
comment by ike · 2020-08-15T20:08:21.300Z · LW(p) · GW(p)

It's not undecidable. It can be understood in multiple ways, some of which are meaningful, and some not.

If I'm teaching a class, and at some point say "the proposition which I just wrote on the board is correct", that statement will be meaningful or meaningless depending on what statement was written. Same for a proposition that refers to a "God" concept - if it's referring to versions of that concept which are meaningful, then it's meaningful.

Replies from: TAG
comment by TAG · 2020-08-15T21:03:14.996Z · LW(p) · GW(p)

But whether it's referring to a version of the concept that is "meaningful" isn't clear.

You are now saying that every proposition is decidably meaningless/meaningful, depending on some further facts about what it is "really" about... facts which may never become apparent.

Replies from: ike
comment by ike · 2020-08-15T22:56:22.460Z · LW(p) · GW(p)

>You are now saying that every proposition is decidably meaningless/meaningful, depending on some further facts about what it is "really" about... facts which may never become apparent.

No, only incredibly vague statements like my example or like "God exists".

Replies from: TAG
comment by TAG · 2020-08-15T23:40:54.075Z · LW(p) · GW(p)

So... previously you were saying that there is one kind of meaninglessness, consisting of unverifiability. Now you are saying that there is another kind, consisting of vagueness.

Replies from: ike
comment by ike · 2020-08-15T23:43:52.595Z · LW(p) · GW(p)

No, a statement can be vague about whether it refers to meaningful statement A or meaningless statement B. That statement as a whole is unverifiable because it's unclear which it refers to.

comment by TAG · 2020-08-15T19:56:56.913Z · LW(p) · GW(p)

Not sure what ordinary theory you’re referring to

Presence of shared meaning enables communication.

Absence of shared meaning prevents communication.

Basically, meaning is about communication.

Scientists discuss possible experiments that would come out differently based on this map, and if they figure one out, then they’ve determined that the revision is meaningful and can then test it.

And if it turns out that the new particle or force is undetectable, their discussions were retrospectively meaningless, by your definition... despite the fact that they were communicating successfully. You need to show that the ordinary theory is false, and to show that verificationism is true.

Of course, that's an appeal to the standard theory... and there is nothing wrong with that. You are making a novel, contentious claim about what "meaning" always meant. Verificationism is less than a hundred years old. Modern linguistics is at least twice as old. We were never in a vacuum about what "meaning" means.

Your stipulative definition doesn't match ordinary usage, so why should anyone accept it?

Replies from: ike
comment by ike · 2020-08-15T20:13:24.736Z · LW(p) · GW(p)

And if it turns out that the new particle or force is undetectable, their discussions were retrospectively meaningless, by your definition... despite the fact that they were communicating successfully.

No - those discussions were about a meaningful purely mathematical question - does X map differ from Y map in an observable manner?

Re standard model - as elsewhere, most people will agree with both of these statements:

  1. There is some sense in which "external reality exists" is true
  2. This is not related to ease of communication. External reality would exist even without any communication and if language had never been invented.

I don't see how those can be squared away with an account of meaning solely grounded in communication, when the parties to that communication will strongly dispute that their meaning is grounded in communication.

Replies from: TAG
comment by TAG · 2020-08-15T20:45:52.390Z · LW(p) · GW(p)

I don’t see how those can be squared away with an account of meaning solely grounded in communication, when the parties to that communication will strongly dispute that their meaning is grounded in communication.

People can be wrong. You could be under the impression that you are speaking Mandarin when you are speaking English. The object level is not the meta level. If you can't speak English, and I can't speak Mandarin, the fact that we are communicating means one of us is wrong.

Replies from: ike
comment by ike · 2020-08-15T21:03:18.068Z · LW(p) · GW(p)

If they can be wrong about that, why can't they be wrong about whether what they're saying is meaningful?

Replies from: TAG
comment by TAG · 2020-08-15T21:06:40.848Z · LW(p) · GW(p)

Because there is evidence of successful communication in coordinated action.

Replies from: ike
comment by ike · 2020-08-15T22:54:04.927Z · LW(p) · GW(p)

What coordinated action is taken that can only be explained by assuming they've managed to communicate about something unverifiable in my sense (i.e. external reality)?

Replies from: TAG
comment by TAG · 2020-08-15T23:43:54.756Z · LW(p) · GW(p)

I don't have to assume your sense is correct.

Replies from: ike
comment by ike · 2020-08-15T23:49:30.111Z · LW(p) · GW(p)

That's not responsive to my question. I didn't say you needed to assume that.

Replies from: TAG
comment by TAG · 2020-08-16T00:26:22.931Z · LW(p) · GW(p)

Recall that verificationism and anti realism are different things. I don't have to prove realism in order to show that the verificationist criterion of meaning is not the only one.

Replies from: ike
comment by ike · 2020-08-16T00:54:06.502Z · LW(p) · GW(p)

Sure, I don't think anything I've said is inconsistent with that?

Replies from: TAG
comment by TAG · 2020-08-17T16:46:51.566Z · LW(p) · GW(p)

Whenever I try to put forward a defense of realism, you say it is meaningless under the verificationist definition of meaning.

But everyone knows that realism is hard to justify using verificationism.

The counterargument is that since realism is valuable, at least to some, verificationism is too limited as a theory of meaning.

So there is a ponens/tollens thing going on.

Replies from: ike
comment by ike · 2020-08-17T17:53:54.672Z · LW(p) · GW(p)

Which definition have you put forward? My complaint is that the definitions are circular.

>The counterargument is that since realism is valuable, at least to some, verificationism is too limited as a theory of meaning.

I would deny that one can meaningfully have preferences over incoherent claims, and note that one can't validly reason that something is coherent based on the fact that one can have a preference over it, as that would be question-begging.

That said, if you have a good argument for why realism can be valuable, it might be relevant. But all you actually have is an assertion that some find it valuable.

Meanwhile, you've asserted both that communication implies meaning, and that parties to a communication can be mistaken about what something means. I don't see how the two are consistent.

Replies from: TAG
comment by TAG · 2020-08-17T18:54:06.176Z · LW(p) · GW(p)

I would deny that one can meaningfully have preferences over incoherent claims

Meaningless! Incoherent!

Incoherent! Meaningless!

Replies from: ike
comment by ike · 2020-08-17T19:12:03.314Z · LW(p) · GW(p)

Those words are interchangeable. Not sure what your point is.

comment by TAG · 2020-08-13T17:04:38.343Z · LW(p) · GW(p)

I do that by trying to draw a coherent picture of how one can do all the things we want to do without referring to these purportedly meaningless statements,

There's no uniform set of values that everyone has.

There are different ways of responding to the failure of strong scientific realism. One is to adopt instrumentalism, aiming only for predictive accuracy and abandoning the search for ontological truth. It's difficult to be a consistent instrumentalist, in something like the way it's difficult to be a committed solipsist. People tend to get involved in science because they care about the nature of reality. If you care about the nature of reality, instrumentalism means giving up on some expected value.

Replies from: ike
comment by ike · 2020-08-13T18:03:53.948Z · LW(p) · GW(p)

care about the nature of reality

I'm not sure why I should care about something I consider to be incoherent.

If there's something specific you think you'd like to do and can't without realism, then can you name it? EY named altruism past the observable border, but as I pointed out any kind of altruism requires the same realism assumption, and altruism remains hard to ground even with that assumption, so it's not particularly compelling as an argument for realism, let alone as an argument that realism is coherent.

Replies from: TAG
comment by TAG · 2020-08-13T18:13:49.973Z · LW(p) · GW(p)

I’m not sure why I should care about something I consider to be incoherent

I am not dictating to you what your values should be. I am just pointing out that you don't get to dictate to others what their values should be.

If there’s something specific you think you’d like to do and can’t without realism, then can you name it

It's hard to even define "do" without realism. You need a real Mount Everest to climb Mount Everest.

And haven't you said that you are not an anti-realist?

Replies from: ike
comment by ike · 2020-08-13T18:33:54.507Z · LW(p) · GW(p)

> I am just pointing out that you don't get to dictate to others what their values should be.

For coherent values, sure. But a preference for something incoherent to be correct doesn't seem like a real value to me.

>You need a real Mount Everest to climb Mount Everest.

But not to have the experience of climbing Mount Everest.

>And haven't you said that you are not an anti-realist?

Yes, realism is incoherent. If you disagree, at least some of the burden should fall on you to define it or explain why it's a useful concept.

Replies from: TAG
comment by TAG · 2020-08-13T18:40:43.022Z · LW(p) · GW(p)

But a preference for something incoherent to be correct doesn’t seem like a real value to me.

You haven't defined incoherence, and you are also treating the preference for coherence like an objective value anyone must have.

Replies from: ike
comment by ike · 2020-08-13T18:49:55.790Z · LW(p) · GW(p)

I don't have a preference for coherence. That's not a value. It's just part of the meaning of value, that it must be coherent.

I haven't given a rigorous definition for coherence, but I've given some criteria - a coherent concept should be capable of explanation, and so far that's not been met.

Replies from: TAG
comment by TAG · 2020-08-14T10:30:24.740Z · LW(p) · GW(p)

It’s just part of the meaning of value, that it must be coherent

Is there some experiment or observation that proves that statement to be true?

Replies from: ike
comment by ike · 2020-08-14T23:17:30.462Z · LW(p) · GW(p)

Definitions aren't true or false; they can be representative of what people mean by certain words to a greater or lesser extent. If you poll people and ask if values can be incoherent and still valid, I expect most to say no?

Replies from: TAG
comment by TAG · 2020-08-15T09:37:12.769Z · LW(p) · GW(p)

So you are almost but not quite saying that there is a separate category of analytical truths.

comment by TAG · 2020-08-13T16:40:55.887Z · LW(p) · GW(p)

What? What’s a “map claim”?

We have a map that says “X”. If the map also says “X exists”, that’s not adding anything useful or meaningful to the map.

But saying "this is a map of Narnia, which does not exist" ,says something useful.. something which stops you sailing off to find Narnia.

Replies from: ike
comment by ike · 2020-08-13T16:59:44.598Z · LW(p) · GW(p)

"this is a map of Narnia which should not be used to predict experiences"

Really, using the Solomonoff induction model, you'd just put a very low probability weight on that map since Narnia is complicated to define.

Replies from: TAG
comment by TAG · 2020-08-13T17:08:29.035Z · LW(p) · GW(p)

I think you are right that Solomonoff can't be simultaneously expected to give you both efficient predictions and correspondence to objective reality.

comment by ike · 2020-08-13T17:04:54.482Z · LW(p) · GW(p)

Anti-realism needs key terms like "map", "observer", "experience" and "predict". Giving clear definitions is a problem everyone faces.

And it's hard to see how it can get by without an ontology. Not only does "observer" need to be defined, observers need to exist. Strong anti-realism seems to be incoherent.

Again, I'm not quite an anti-realist, but I can define those words. A map is a mental model I use to make predictions. I don't need the term observer, nor do I need observers to exist - all I really need is for myself to exist, and I'm fairly comfortable with Descartes in that regard. So yes, I exist and I have experiences, and I can know that directly. Anything beyond that is meaningless.

Replies from: TAG
comment by TAG · 2020-08-13T17:10:18.384Z · LW(p) · GW(p)

Who are you talking to?

Replies from: ike
comment by ike · 2020-08-13T17:59:17.524Z · LW(p) · GW(p)

My map says that if I talk in a certain manner I'll get interesting feedback in return. So far it's worked out pretty well.

Of course, I might internalize the usage of this map to an extent where I don't explicitly think of it as a map unless reflecting on it.

Replies from: TAG
comment by TAG · 2020-08-13T18:21:53.542Z · LW(p) · GW(p)

My map says that if I talk in a certain manner I’ll get interesting feedback in return.

But I could say the same thing about you, making your argument incoherent ;-)

Replies from: ike
comment by ike · 2020-08-13T18:34:28.360Z · LW(p) · GW(p)

How exactly does that make my argument incoherent?

Replies from: TAG
comment by TAG · 2020-08-13T18:36:44.235Z · LW(p) · GW(p)

You've never provided an exact definition of incoherence, so why should I (whoever I am)?

Replies from: ike
comment by ike · 2020-08-13T18:43:22.369Z · LW(p) · GW(p)

I've identified a specific concept, asserted it lacks meaning, and built up a worldview that doesn't require it. I don't know which part of my argument or which terms you object to here.

Replies from: TAG
comment by TAG · 2020-08-13T18:46:46.447Z · LW(p) · GW(p)

That would be the asserting rather than proving.

Replies from: ike
comment by ike · 2020-08-13T18:51:18.939Z · LW(p) · GW(p)

What are you trying to say? What are you objecting to?

Replies from: TAG
comment by TAG · 2020-08-13T18:57:01.504Z · LW(p) · GW(p)

I’ve identified a specific concept, asserted it lacks meaning

Replies from: ike
comment by ike · 2020-08-13T18:59:06.322Z · LW(p) · GW(p)

How would you like me to prove a negative, other than pointing out the total lack of reason to believe the positive and building up an alternative theory without it?

Replies from: TAG
comment by TAG · 2020-08-13T19:03:47.477Z · LW(p) · GW(p)

The reason to build an alternative is a value system other than yours.

Replies from: ike
comment by ike · 2020-08-13T19:09:30.332Z · LW(p) · GW(p)

Can you give a specific value system and explain how realism helps it? Right now you seem to be vaguely claiming that an alternative system could theoretically be useful, without really explaining how it would be coherent or what the system is.

Replies from: TAG
comment by TAG · 2020-08-15T09:55:55.276Z · LW(p) · GW(p)

Any value system where you care about what is real.

EY didn't wholeheartedly embrace verificationism, because he cares about MW being real, and God not being real.

Replies from: ike
comment by ike · 2020-08-15T15:28:05.201Z · LW(p) · GW(p)

The God hypothesis does produce different predictions. MW produces different predictions to the extent that you accept quantum immortality, although "I predict I will never have died" is not quite a prediction as to sensory data so I'm not sure if it counts.

Regardless, I view believing in quantum or modal immortality as consistent with my view.

Replies from: TAG
comment by TAG · 2020-08-15T15:48:31.591Z · LW(p) · GW(p)

The Deism hypothesis does not lead to different predictions.

QI is not empirical evidence as usually understood.

Replies from: ike
comment by ike · 2020-08-15T16:54:58.984Z · LW(p) · GW(p)

Deism is incoherent on my views, to the extent it makes no predictions. I don't know why you'd *care* if it was "correct" or not, even if that was somehow a coherent concept.

QI is weird regardless of realism, I think it's an edge case but I don't think it really presents a problem for my view.

Replies from: TAG
comment by TAG · 2020-08-15T18:24:16.495Z · LW(p) · GW(p)

Deism is incoherent on my views, to the extent it makes no predictions.

That isn't the definition of incoherence that you previously offered.

QI is weird regardless of realism, I think it’s an edge case

The definition of coherence you previously offered was that everything gets unambiguously sorted into either of two categories.

Replies from: ike
comment by ike · 2020-08-15T19:26:39.088Z · LW(p) · GW(p)

>That isn't the definition of incoherence that you previously offered.

I'm not offering a definition in the comment you replied to; I'm simply stating a consequence of my earlier definition. To the extent Deism does not make predictions, it's a claim about external reality and is meaningless.

>The definition of coherence you previously offered was that everything gets unambiguously sorted into either of two categories.

What is the exact claim that you think is ambiguously coherent under my definition?

comment by Ericf · 2020-08-13T06:03:50.541Z · LW(p) · GW(p)

"Electrons exist" means I anticipate other people acting in a way that matches how they would act if their observations matched my map of how electrons function. Verbal shorthands are useful things.

Replies from: ike
comment by ike · 2020-08-13T06:08:57.967Z · LW(p) · GW(p)

You're agreeing with me - I acknowledged in the post that such shorthands are consistent with my view.

But EY clearly rejects that view, and others I've run this by also reject it. If I could get everyone to agree that the meaning is tied to expectations, that would be a success.

Replies from: Ericf
comment by Ericf · 2020-08-13T06:58:56.819Z · LW(p) · GW(p)

Why? In what domain does unpacking the definition of "exists" lead to more clarity?

This looks a lot like saying "5 isn't real, it's just 1 plus 1 plus 1 plus 1 plus 1"

Replies from: ike
comment by ike · 2020-08-13T13:28:26.993Z · LW(p) · GW(p)

Well, you have a definition of exists, and others have a different definition. Pointing out that the definition others use is incoherent might not directly lead to greater clarity, but it's a valid point to make.