Outline of Metarationality, or much less than you wanted to know about postrationality

post by Gordon Seidoh Worley (gworley) · 2018-10-14T22:08:16.763Z · LW · GW · 151 comments


There was a recent discussion on Facebook that led to an ask for a description of postrationality that isn't framed in terms of how it's different from rationality (or rather, perhaps, a challenge that such a thing could not be provided). I'm extra busy right now until at least the end of the year, so I don't have a lot of time for philosophy and AI safety work, but I'd like to respond with at least an outline of a constructive description of post/meta-rationality. I'm not sure everyone who identifies as part of the metarationality movement would agree with my construction, but this is what I see as the core of our stance.

Fundamentally I think the core belief of metarationality is that epistemic circularity (a.k.a. the problem of the criterion, the problem of perception, the problem of finding the universal prior) necessitates metaphysical speculation, viz. we can't reliably say anything about the world and must instead make one or more guesses to overcome this, at least in establishing the criterion for assessing truth. Further, since the criterion for knowing what is true is unreliably known, we must be choosing that criterion on some other basis than truth, and so instead view that prior criterion as coming from usefulness to some purpose we have.

None of this is radical; it's in fact all fairly standard philosophy. What makes metarationality what it is comes from the deep integration of this insight into our worldview. Rather than truth or some other criterion, telos (usefulness, purpose) is the highest value we can serve, not by choice, but by the trap of living inside the world and trying to understand it from experience that is necessarily tainted by it. The rest of our worldview falls out of updating our maps to reflect this core belief.

To say a little on this, when you realize the primacy of telos in how you make judgments about the world, you see that you have no reason to privilege any particular assessment criterion except in so far as it is useful to serve a purpose. Thus, for example, rationality is often important to the purpose of predicting and understanding the world because we, through experience, come to know it to be correlated with making predictions that later come true, but other criteria, like compellingness-of-story and willingness-to-life, may be better drivers in terms of creating the world we would like to later find ourselves in. For what it's worth, I think this is the fundamental disagreement with rationality: we say you can't privilege truth, and since you can't, it sometimes works out better to focus on other criteria when making sense of the world.

So that's the constructive part; why do we tend to talk so much about postrationality by contrasting it with rationality? I think there are two reasons. First, postrationality is etiologically tied to rationality: the ideas come from people who first went deep on rationality and eventually saw what they felt were limitations of that worldview; thus we naturally tend to think in terms of how we came to the postrationalist worldview and want to show others how we got here from there. Second and relatedly, metarationality is a worldview that comes from a change in a person that many of us choose to identify with Kegan's model of psychological development, specifically the 4-to-5 transition; thus we think it's mainly worthwhile to explain our ideas to folks we'd say are in the 4/rationalist stage of development, because they are the ones who can directly transition to 5/metarationality without needing to go through any other stages first.

Feel free to ask questions for clarification in the comments; I have limited energy available for addressing them but I will try my best to meet your inquiries. Also, sorry for no links; I wouldn't have written this if I had to add all the links, so you'll have to do your own googling or ask for clarification if you want to know more about something, but know that basically every weird turn of phrase above is an invitation to learn more.

151 comments

Comments sorted by top scores.

comment by clone of saturn · 2018-10-15T17:09:36.391Z · LW(p) · GW(p)

It seems to me that the whole circularity issue was answered by Eliezer in Where Recursive Justification Hits Bottom [LW · GW]. What's your disagreement with that post?

Replies from: TAG, gworley
comment by TAG · 2018-10-15T20:56:23.846Z · LW(p) · GW(p)

If you can't justify your foundational beliefs, you might as well stick with them... although, once you have recognised that, you also shouldn't put a high level of credence on any of your beliefs.

Edit: there are two axes here: what epistemology you should use, and how confident you should be in it.

comment by Gordon Seidoh Worley (gworley) · 2018-10-15T17:48:43.451Z · LW(p) · GW(p)

None especially I think (although I haven't reread it). You necessarily must ultimately take a guess (metaphysical speculation, a leap of faith) and just get on with it; perhaps the difference is in how much the great doubt imposed by this necessary guess influences one's worldview: the straw rationalist says "great, now we can get on to other things!" and the straw postrationalist says "good enough for the moment, we have to get on with other things".

Replies from: monktastic
comment by monktastic · 2020-05-03T22:51:18.095Z · LW(p) · GW(p)

To put it in other terms: the straw rationalist becomes cognitively fused with their worldview, and the postrationalist does not. Even when one believes that one is not fused with a worldview, there is almost always a cognitive fusion with one's metaphysics going on. It's what gives rise to one's very experience of reality in the first place.

comment by jessicata (jessica.liu.taylor) · 2018-10-15T02:28:27.724Z · LW(p) · GW(p)

Further, since the criterion for knowing what is true is unreliably known, we must be choosing that criterion on some other basis than truth, and so instead view that prior criterion as coming from usefulness to some purpose we have.

Doesn't this have the standard issue with philosophical pragmatism, i.e. that knowing what is useful requires knowing about reality? (In other words, reducing questions of truth to questions of usefulness reduces one question to a different one that is no easier)

Certainly, ontologies must be selected partially based on criteria other than correspondence with reality (such as analytic tractability), but for these ontologies to be useful in modeling reality, they must be selected based on a pre-ontological epistemology, not only a pre-ontological telos.

Replies from: gworley, TAG, TAG
comment by Gordon Seidoh Worley (gworley) · 2018-10-15T02:58:17.870Z · LW(p) · GW(p)

In general I'd say postrationality does necessarily make use of the pre-ontological, what I'd call the ontic, things in themselves, or things-as-things. I think this is why most serious postrationalist-identifying folks I know practice some form of meditation: we need a way to get in touch with reality with as little ontology as possible, and meditation includes well-explored techniques for doing just that. Because you are right: ultimately we end up grounding things in what we don't and can't know (and maybe can't even experience), which I call metaphysical speculation, and which points at the same thing as Kierkegaard's "leap of faith" (although without Kierkegaard's Christian bias). Much of the challenge we face is dealing with the problem of balancing the uncertainty from speculation and the pragmatic need to get on with life.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2018-10-15T03:41:03.686Z · LW(p) · GW(p)

I guess what I am getting at is: Kierkegaard's pre-ontology doesn't selectively choose an ontology that has high correspondence with reality, so he has a weak pre-ontological epistemology. It is possible to have a better pre-ontological epistemology than Kierkegaard. Meditation probably helps, as do the principles discussed in this post on problem formulation. (To the extent that I take pre-ontology/meta-ontology seriously, I guess I might be a postrationalist according to some definitions)

A specific example of a pre-ontological epistemology is a "guess-and-check-and-refine" procedure, where you get acquainted with the phenomenon of interest, come up with some different ontologies for it, check these ontologies based on factors like correspondence with (your experience of) the phenomenon and internal coherence, and refine them when they have problems and it's possible to improve them. This has some similarities to Solomonoff induction though obviously there are important differences. Even in the absence of perfect knowledge of anything and without resolving philosophical skepticism, this procedure selectively chooses ontologies that have higher correspondence with reality.
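
To make the shape of that procedure concrete, here is a minimal sketch (purely illustrative, not anything jessicata specifies; the "ontologies" are stood in for by simple threshold models of a one-dimensional phenomenon, and the fit and coherence scores are crude placeholders for correspondence-with-experience and internal coherence):

```python
import random

def fit(threshold, observations):
    # How often "x counts as large iff x > threshold" matches what was observed.
    return sum((x > threshold) == label for x, label in observations) / len(observations)

def coherence(threshold):
    # Stand-in for internal-coherence criteria: a small bonus for cutoffs that
    # stay inside the range the observations can actually speak to.
    return 0.1 if 0.0 <= threshold <= 1.0 else 0.0

def guess_check_refine(observations, rounds=20):
    candidates = [random.uniform(0, 1) for _ in range(5)]               # guess
    best = max(candidates, key=lambda t: fit(t, observations) + coherence(t))
    for _ in range(rounds):                                             # refine
        tweaks = [best + random.gauss(0, 0.05) for _ in range(5)]
        best = max([best] + tweaks,
                   key=lambda t: fit(t, observations) + coherence(t))   # check
    return best

# Toy usage: a phenomenon whose hidden structure is a cutoff at 0.6.
data = [(x, x > 0.6) for x in (random.random() for _ in range(200))]
print(guess_check_refine(data))  # tends to settle near 0.6
```

Even though nothing in the loop "knows" the true cutoff, the procedure selectively keeps the candidate ontologies that correspond better with the observations, which is the point being made about imperfect but truth-tracking pre-ontological epistemology.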

I guess you could describe this as "selecting an ontology based on how useful it is according to your telos" but this seems like a misleading description; the specific criteria used aren't directly about usefulness, and relate to usefulness largely through being proxies for truth.

It's quite possible that we don't disagree on any of these points and I'm just taking issue with your description.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-15T18:07:53.072Z · LW(p) · GW(p)
It's quite possible that we don't disagree on any of these points and I'm just taking issue with your description.

That might well be the case. I don't have much of an answer about how to address the ontic directly without ontology, and I view learning how to more fully engage it as a key aspect of the zen practice I engage in. But zen also pushes you away from using language about these topics, so while I may be getting more in touch with the ontic myself, I'm not developing a skill to communicate about it, largely because the two are viewed to be in conflict and learning to talk about it obscures the ability to get in direct contact with it. This seems a limitation of the techniques I'm using, but I'm not (yet) in a position to say either that it's a necessary limitation or that we can go beyond it.

Telos/purpose/usefulness/will is my best way of talking about what I might describe as the impersonal animating force of the universe that exists prior to our understanding of it, but I agree something is lost when I try to nail it down into language and talk about usefulness to a purpose, because that puts it in the language of measurement. I also think you are right that truth is often instrumentally so important to any purpose that it ends up dominating our concerns, such that rationality practice is dramatically more effective at creating the world we desire than most anything else. That is why I try to at least occasionally emphasize that metarationality seeks to realize the limitations of rationality so that we can grapple with them, while also not forgetting how useful rationality is!

Replies from: c0rw1n
comment by c0rw1n · 2018-10-16T01:11:02.454Z · LW(p) · GW(p)

Replies from: Raemon
comment by Raemon · 2018-10-16T01:31:37.994Z · LW(p) · GW(p)

Mod Note: this comment seems more confrontational than it needs to be. (A couple other comments in the thread also seem like they probably cross the line. I haven't had time to process everything and form a clear opinion, but wanted to make at least a brief note)

(this is not a comment one way or another on the overall conversation)

Added: It seems the comment I replied to has been deleted.

comment by TAG · 2018-10-15T21:21:35.897Z · LW(p) · GW(p)

We don't have the option to trade off between truth and usefulness, because we don't have a means of establishing truth, in the sense of correspondence to reality, separately from usefulness, in the sense of predictive accuracy. If you are a typical scientific philosopher, you will treat usefulness, or predictive power, as a substitute for correspondence to reality without understanding how it could work.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2018-10-15T21:38:30.939Z · LW(p) · GW(p)

Aren't there lots of false beliefs that are compatible with good predictions and we know are false? E.g. the chocolate cake hypothesis [LW · GW].

Replies from: TAG
comment by TAG · 2018-10-15T21:54:58.530Z · LW(p) · GW(p)

Lots compared to what? How do you compare that number to the number of predictively adequate models which are false for unknown reasons?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2018-10-15T22:02:07.986Z · LW(p) · GW(p)

I'm pretty confused at this point. You started with a fairly universal statement: "we don't have a means of establishing truth, in the sense of correspondence to reality, separately from usefulness, in the sense of predictive accuracy". I named a counterexample: the chocolate cake hypothesis. This invalidates the universal claim, unless I'm misinterpreting something here.

It's transparently obvious that there are lots of hypotheses similar to the chocolate cake hypotheses (it could be a vanilla cake, or a cherry cake, or...). I'm not making any relative statement about how many of these there are compared to anything else.

Replies from: TAG
comment by TAG · 2018-10-15T22:08:25.482Z · LW(p) · GW(p)

Then let me restate my point as 'we don't have a general means...'

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2018-10-15T22:16:38.271Z · LW(p) · GW(p)

Ok. What do you think of cartography? Is mapping out a new territory using tools like measurement and spatial representation a process that does not establish truth separately from predictive accuracy?

It seems wrong (and perhaps a form of scientism) to frame cartography in terms of predictive accuracy, since, while the maps do end up having high predictive accuracy, the mapmaking process does not involve predictive accuracy directly, only observation and recording, and predictive accuracy is a side effect of the fact that cartography gives accurate maps.

This actually seems like a pretty general phenomenon: predictive accuracy can't be an input into your epistemic process, since predictions are about the future. Retrodictions (i.e. "predictions" of past events) can go into your epistemic process, but usefulness is more about predictive ability rather than retrodictive ability.

Replies from: TAG, Elo
comment by TAG · 2018-10-15T23:07:54.107Z · LW(p) · GW(p)

What do you think of cartography? Is mapping out a new territory using tools like measurement and spatial representation a process that does not establish truth separately from predictive accuracy?

It is a process that does not establish truth separately from predictive accuracy. You can have 100% predictively accurate cartography in a simulation.

observation and recording,

observation of what? Having a perception "as if" of something doesn't tell you what the ultimate reality is.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2018-10-15T23:20:05.174Z · LW(p) · GW(p)

I don't know what you mean by "separate from" at this point and it's probably not worth continuing discussion until that's clearer. (In what sense can anything be separate from anything else in a causally connected universe?)

I mean observation in the conventional sense (getting visual input and recognizing e.g. objects in it), which in humans requires a working visual cortex. Obviously cartography doesn't resolve philosophical skepticism and I'm not claiming that it does, only that it works in producing accurate representations of the territory given assumptions that are true of the universe we inhabit.

Replies from: TAG
comment by TAG · 2018-10-15T23:24:26.179Z · LW(p) · GW(p)

In what sense can anything be separate from anything else in a causally connected universe?

So the orthogonality thesis is a priori false?

Obviously cartography doesn’t resolve philosophical skepticism

But that is exactly what I am talking about!

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2018-10-15T23:27:42.211Z · LW(p) · GW(p)

Yes, an agent's goals aren't causally or probabilistically independent of its intelligence, though perhaps a weaker claim such as "almost any combination is possible" is true.

EDIT: re philosophical skepticism: okay, so how does bringing in predictive accuracy help? That doesn't resolve philosophical skepticism either (see: no free lunch theorems).

Replies from: TAG
comment by TAG · 2018-10-16T06:53:40.645Z · LW(p) · GW(p)

Even if the universal claim that complete orthogonality is impossible is true (I notice in passing that it is argued for with a claim about how the world works, so that you are assuming scepticism has been resolved in order to resolve scepticism), the correlation between prediction and correspondence could be 0.0001%.

Predictive accuracy doesn't help with philosophical scepticism. It is nonetheless worth pursuing because it has practical benefits.

comment by Elo · 2018-10-15T22:32:43.776Z · LW(p) · GW(p)

To say that a map exists is to propose that it has substance within the Territory, existing within the concept of being fathomable and describable. In that sense a map is in the territory too.

Rationality: the map is not the territory.

Post rationality: "the map is not the territory" is a category error, where the NotTerritory.map is in a broader territory that post rationality has stopped pretending doesn't exist. That broader territory has a map that is the territory.

comment by TAG · 2018-10-16T06:44:22.142Z · LW(p) · GW(p)

It's much worse than that. We don't have the option of selecting an ontology by its correspondence to reality, because we don't have a direct test for it, only the assumption that predictive power and simplicity somehow add up to an indirect test.

comment by cousin_it · 2018-10-16T09:59:08.610Z · LW(p) · GW(p)

Fundamentally I think the core belief of metarationality is that epistemic circularity (a.k.a. the problem of the criterion, the problem of perception, the problem of finding the universal prior) necessitates metaphysical speculation, viz. we can't reliably say anything about the world and must instead make one or more guesses to overcome this, at least in establishing the criterion for assessing truth.

I don't think that's a problem. Any reasoning process needs an unquestioned "core" - that's just as true for a person as for an automatic theorem prover. And different people's "cores" seem to agree on observations a lot, making science possible.

Replies from: Vladimir_Nesov, gworley, TAG
comment by Vladimir_Nesov · 2018-10-16T10:30:05.359Z · LW(p) · GW(p)

You can decide to question any such principles, which is how they get formulated in the first place, as designs for improved cognition devised by an evolved mind that doesn't originally follow any particular crisp design, but can impose order on itself. The only situation where they remain stable is if the decision always comes out in their favor, which will happen if they are useful for agents pursuing your preference. When these agents become sufficiently different, they probably shouldn't use any object level details of the design of cognition that holds for you. The design improves, so it's not the same.

Examples of such principles are pursuit of well-calibrated empirical beliefs, of valid mathematical knowledge, of useful plans, and search for rational principles of cognition.

I don't know how to describe the thing that remains through correct changes, which is probably what preference should be, so it's never formal. There shouldn't be a motivation to "be at peace" with it, since it's exactly the thing you turn out to be at peace with, for reasons other than being at peace with it.

comment by Gordon Seidoh Worley (gworley) · 2018-10-16T17:57:49.256Z · LW(p) · GW(p)

Hmm, this seems sensible as far as it goes, but then how do you pick the unquestioned core? I imagine you're thinking of this like picking the axioms of a system. And so long as you work only within the system you're right that it doesn't matter; where it becomes a big deal is when the choice of axioms influences how that system relates to things outside itself. That's why we view it as worth considering, because sufficiently interesting systems that are consistent cannot also be complete.

Replies from: clone of saturn, cousin_it
comment by clone of saturn · 2018-10-16T18:23:16.144Z · LW(p) · GW(p)

The part of you that's generating your thoughts is the unquestioned core. It's too late to pick the unquestioned core, you already are the unquestioned core.

Replies from: Richard_Kennaway, TAG
comment by Richard_Kennaway · 2018-10-17T11:54:05.384Z · LW(p) · GW(p)

It's only unquestioned for the moment, not unquestionable. You start from where you happen to be. That is as true of deep philosophy as of every other activity.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-17T18:01:36.912Z · LW(p) · GW(p)

So jumping back in here, my original line of comment was towards cousin_it, who seemed (to me) to be suggesting some choice of unquestioned core the way we pick axioms, rather than jumping past that to the real core. I think that real core is interesting and worth saying a little about, because the reasons why it's unquestionable (at least for a practical sense of questioning to which we can reasonably expect an answer) get at the heart of the epistemological issues I see as the root of the postrationalist worldview.

I've largely emphasized the problem of the criterion because that's the formulation of this epistemological issue that has precedence in Western philosophy, but we see the same issue pop up in mathematics and logic as the problem of induction, in analytic philosophy as the grounding problem, and in Bayesian epistemology as the question of the universal prior. But given the direction of this discussion, I'd like to bring up the approach to it from the phenomenological tradition through the problem of perception.

The problem of perception is that, if we use our senses to learn about the world, then we cannot trust our senses to reliably provide us information about the world. You've no doubt experienced this first hand if you've ever seen an optical illusion or felt imagined touches or had your smell and taste tricked by unusual combinations of ingredients into making you think you were eating something other than you were. Or our senses may have blind spots, the way you can't directly look into the center of your own pupil because there's a blindspot in the middle of your vision. And these are just the times we are able to notice something weird is happening; we literally don't and can't know about the things our senses may obscure from us.

If you practice meditation or phenomenological reduction, you'll find there is a core loop you can't bracket away of some thing observing itself (or rather in epoche you find you just keep bracketing the same thing over and over again without managing to strip anything further away). We don't have to put a name on it, but some have called it pure awareness, consciousness, and the inner self. Epoche provides a way to see this thing analytically, and meditation provides a way to experience nothing but it (to experience nothing but experience itself).

So when clone of saturn says you already are the unquestioned core, this is what they are pointing at, something so small and hard and fundamental to how we know that we can't question it in any meaningful way. And this exposes another way of seeing the difference between rationality and postrationality (or at least a difference of emphasis): the rationalist project seems to me to either deny this hard, unquestionable thing or make universal assumptions about it, and the postrationalist project sees it as a free variable we must learn to work with in different contexts.

Replies from: jessica.liu.taylor, Richard_Kennaway
comment by jessicata (jessica.liu.taylor) · 2018-10-17T19:23:45.547Z · LW(p) · GW(p)

The idea clone of saturn stated is discussed in the sequences, in Created Already In Motion [LW · GW]:

The Tortoise's mind needs the dynamic of adding Y to the belief pool when X and (X→Y) are previously in the belief pool. If this dynamic is not present—a rock, for example, lacks it—then you can go on adding in X and (X→Y) and (X⋀(X→Y))→Y until the end of eternity, without ever getting to Y.

The phrase that once came into my mind to describe this requirement, is that a mind must be created already in motion. There is no argument so compelling that it will give dynamics to a static thing. There is no computer program so persuasive that you can run it on a rock.

And in No Universally Compelling Arguments [LW · GW]:

And this (I then replied) relies on the notion that by unwinding all arguments and their justifications, you can obtain an ideal philosophy student of perfect emptiness, to be convinced by a line of reasoning that begins from absolutely no assumptions.

But who is this ideal philosopher of perfect emptiness? Why, it is just the irreducible core of the ghost!

And that is why (I went on to say) the result of trying to remove all assumptions from a mind, and unwind to the perfect absence of any prior, is not an ideal philosopher of perfect emptiness, but a rock. What is left of a mind after you remove the source code? Not the ghost who looks over the source code, but simply... no ghost.

So—and I shall take up this theme again later—wherever you are to locate your notions of validity or worth or rationality or justification or even objectivity, it cannot rely on an argument that is universally compelling to all physically possible minds.
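
To make the quoted point concrete: a pool of beliefs plus a list of implications derives nothing by itself; some process outside the beliefs has to apply the inference rule. A minimal sketch in toy code (an illustration only, not anything from the quoted posts):

```python
def forward_chain(beliefs, implications):
    beliefs = set(beliefs)
    changed = True
    while changed:                               # this loop IS the dynamic
        changed = False
        for premise, conclusion in implications:
            if premise in beliefs and conclusion not in beliefs:
                beliefs.add(conclusion)          # modus ponens, applied by the engine
                changed = True
    return beliefs

pool = {"X"}
rules = [("X", "Y"), ("Y", "Z")]
print(forward_chain(pool, rules))  # {'X', 'Y', 'Z'}: the engine was already in motion

# A "rock" holding the same pool and rules, but never running the loop,
# stays at {"X"} forever, no matter how many implications are written into it.
```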

comment by Richard_Kennaway · 2018-10-17T20:48:54.778Z · LW(p) · GW(p)

I find it difficult to know what to make of the concrete statements here, because they seem so obviously false.

You've no doubt experienced this first hand if you've ever seen an optical illusion

It is by our senses that we know that these things are optical illusions.

you can't directly look into the center of your own pupil

You can do just that in a mirror. BTW, the blind spot is not in the middle of the visual field, but off to the side. It is easy to see it, though, by closing one eye and attending to where the blind spot in the other eye is. When the optician shines an ophthalmoscope into my eye, I can see the blood vessels in my own retina.

we literally don't and can't know about the things our senses may obscure from us.

Our senses fail to show us X-rays, atoms, the curvature of space-time, and anything happening on the other side of the world, but we can very easily know about these things, by clever use of our senses and tools, tools that were created with the aid of the senses.

So, all of the above quotes appear to be obviously, trivially false. Is there some other interpretation, or are they mere deepities?

the postrationalist project sees [the unquestioned core] as a free variable we must learn to work with in different contexts.

I remain sceptical about this unquestionable core. The argument for its existence looks isomorphic to the proof of God as first cause, first knowledge, and first good. But I'll leave that aside. What constitutes working with it?

Replies from: gworley, Elo
comment by Gordon Seidoh Worley (gworley) · 2018-10-18T01:19:44.619Z · LW(p) · GW(p)
So, all of the above quotes appear to be obviously, trivially false. Is there some other interpretation, or are they mere deepities?

My point is that our naive use of our senses often deceives us. This is not meant as a line of evidence in support of my position, but as an evocative experience you've probably had that is along the same lines as the thing I'm gesturing at. It is of course different in that we know those things are illusions because it turns out we have more information than we initially think we do. I am more interested here in the experience of asking your senses for information about the world and getting back what turns out to be misinformation, so that you have something concrete to grasp onto as a referent for the kind of thing I'm pointing at when I make the more general point about the problem of perception.

Thank you for correcting me on the blind spot thing.

I remain sceptical about this unquestionable core. The argument for its existence looks isomorphic to the proof of God as first cause, first knowledge, and first good. But I'll leave that aside. What constitutes working with it?

It is related, but only because it's the existence of the unexaminable core that creates the free variable that allows us to pick the leap of faith we want to take, be it to God or something else. In fact this is what I would accuse most rationalists of: taking a leap of faith to positivism (that we can establish the truth value of every assertion, or, more properly since rationalists are also Bayesians, the likelihood of the truth of every assertion), even if it's done out of pragmatism. Working with the unexaminable means remaining deeply skeptical that we know anything or even can know anything, and considering the possibility that we are deeply deluded. Most of the time this doubt ends up working out in favor of rationality, but sometimes it seems to not, or at least you're less certain that it does. This invites us to reconsider our most fundamental assumptions about the world and how we know it and be less sure that things are as they seem.

comment by Elo · 2018-10-17T21:04:32.378Z · LW(p) · GW(p)

You can do just that in a mirror.

No, that would be a reflection.

http://www.headless.org/experiments/the-mirror.htm

It is by our senses that we know that these things are optical illusions.

By only looking through the visual sense we are able to see things that are. What comes as confusing and "illusion" like is when we use a sense of thoughts/conceptualisation to interpret the visual information.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2018-10-17T22:02:32.335Z · LW(p) · GW(p)
http://www.headless.org/experiments/the-mirror.htm

The optics in that article are completely wrong. To a first approximation, a mirror of a given size shows the same amount of your face independently of how far or near you hold it. It does not blur anything when held very near; it is your eyes that are unable to focus that near. The rest is woo.

Replies from: Elo
comment by Elo · 2018-10-17T23:04:14.070Z · LW(p) · GW(p)

Did you do the experiment?

Replies from: clone of saturn, Richard_Kennaway
comment by clone of saturn · 2018-10-18T04:17:40.563Z · LW(p) · GW(p)

I just tried it, and Richard Kennaway is right.

Replies from: Elo
comment by Elo · 2018-10-21T17:52:31.852Z · LW(p) · GW(p)

I have no idea how this happened. We appear to have different experiences.

the same amount of your face independently of how far or near you hold it.

I'd make one clarification to say: the mirror shows a reflection. The mirror also changes apparent size depending on the distance from the eyes. Now eyeballs obviously don't bend physics to produce experience, which is why I say apparent size. I suppose I could say visual size instead.

comment by Richard_Kennaway · 2018-10-18T14:57:46.654Z · LW(p) · GW(p)

It's a feature of mirrors I am familiar with already. But just to please you, I got out a 2-inch-wide circular mirror and did the experiment, and it is as I said and knew it would be.

Exercise: How large was the circular area of my face that I saw in it?

Replies from: Elo
comment by Elo · 2018-10-21T17:48:00.881Z · LW(p) · GW(p)

You really gave me a confusing moment. I double-checked for myself and then asked someone else too after this response.

I don't know how it's possible to get your results. The science of the inverse square law alone should anticipate experience.

If we live in universes with different laws of physics, that's fine. Carry on!

In response to the question: that depends on (A) how far away the mirror is, (B) what scale you use to measure it, (C) what the frame of reference is for the measurement, and (D) whether you are measuring your "face" or the reflection.

Replies from: Richard_Kennaway, nshepperd
comment by Richard_Kennaway · 2018-10-22T07:40:08.496Z · LW(p) · GW(p)
I don't know how it's possible to get your results. The science of the inverse square law alone should anticipate experience.

This has nothing to do with the inverse square law, which relates the intensity of light to distance from the source. It's geometrical optics: the paths the light takes.

comment by nshepperd · 2018-10-21T22:57:58.293Z · LW(p) · GW(p)

How big was your mirror, and how much of your face did you see in it?

comment by TAG · 2018-10-17T11:06:11.620Z · LW(p) · GW(p)

No one can convert? No one can reflect on their limitations?

comment by cousin_it · 2018-10-16T19:05:34.238Z · LW(p) · GW(p)

What clone of saturn said.

comment by TAG · 2018-10-16T11:21:17.636Z · LW(p) · GW(p)

Whether it's a problem depends on what you are doing: if you are making strong or exclusive claims to know the truth, it is a problem; if you are modest and/or relativistic, much less so.

comment by nshepperd · 2018-10-14T23:14:17.402Z · LW(p) · GW(p)

Thank you for providing this information.

However, if this is really what 'postrationality' is about, then I think it remains safe to say that it is a poisonous and harmful philosophy that has no place on LW or in the rationality project.

Further, since the criterion for knowing what is true is unreliably known, we must be choosing that criterion on some other basis than truth, and so instead view that prior criterion as coming from usefulness to some purpose we have.

You appear to be saying that since it's impossible to be absolutely certain that any particular thing is the truth, that makes it ok to instead substitute any other, easier-to-solve criterion. This is an incredibly weak justification for anything.

Thus, for example, rationality is often important to the purpose of predicting and understanding the world because we, through experience, come to know it to be correlated with making predictions that later come true, but other criteria, like compellingness-of-story and willingness-to-life, may be better drivers in terms of creating the world we would like to later find ourselves in.

This talk of alternative criteria having equal value sounds very good and cosmopolitan, but actually we know exactly what happens when you stop using truth as your criterion for "truth". Nothing good.

Replies from: gworley, TAG, Kaj_Sotala, alkexr
comment by Gordon Seidoh Worley (gworley) · 2018-10-14T23:51:56.565Z · LW(p) · GW(p)

You appear to be saying that since it's impossible to be absolutely certain that any particular thing is the truth, that makes it ok to instead substitute any other, easier-to-solve criterion. This is an incredibly weak justification for anything.

To me this is a strawmanning of postrationality into a thing I wouldn't support and is more akin to the way postmodernism had difficulty because it failed to appreciate all the work rationality does. That the ultimate criterion is telos doesn't excuse one from the need to interact with reality if one wants to successfully serve a purpose. You don't get to just "pick whatever you want" because this would be to ignore all the evidence you have about the world, although this is definitely a way people fail to understand and misapply postrationalist ideas.

This talk of alternative criteria having equal value sounds very good and cosmopolitan, but actually we know exactly what happens when you stop using truth as your criterion for "truth". Nothing good.

This sounds confused to me. I'm not saying all criteria have equal value; I'm saying they are evaluated according to their ability to help you fulfill some purpose. At the risk of putting words in your mouth, it sounds instead as if you think we can assess the criterion of truth, which we cannot and have known we cannot for over 2000 years. The idea of doublethink, which you reference via link, is something different: it's deliberately cultivating a confused relationship with the truth in terms of how you have decided to assess truth. Postrationality doesn't and can't, when properly understood, lead to doublethink because this would be to relate to truth in a way one cannot except by making stronger assumptions than postrationality does about truth.

However, if this is really what 'postrationality' is about, then I think it remains safe to say that it is a poisonous and harmful philosophy that has no place on LW or in the rationality project.

I will at least agree with you that it's dangerous to folks who are not already rationalists; that's why I've lately preferred not to talk about this topic much except in places that are especially likely to draw an audience already sufficiently versed in rationalist thinking. Without a tight grip on the usefulness of truth and rationality when prediction is your purpose you can easily be led down the path to what we might call pomo hell.

Replies from: nshepperd
comment by nshepperd · 2018-10-15T01:49:13.406Z · LW(p) · GW(p)

At the risk of putting words in your mouth, it sounds instead as if you think we can assess the criterion of truth, which we cannot and have known we cannot for over 2000 years.

But of course we can, as evidenced by the fact that people make predictions that turn out to be correct, and carry out plans and achieve goals based on those predictions all the time.

We can't assess whether things are true with 100% reliability, of course. The dark lords of the matrix could always manipulate your mind directly and make you see something false. They could be doing this right now. But so what? Are you going to tell me that we can assess 'telos' with 100% reliability? That we can somehow assess whether it is true that believing something will help fulfill some purpose, with 100% reliability, without knowing what is true?

The problem with assessing beliefs or judgements with anything other than their truth is exactly that the further your beliefs are from the truth, the less accurate any such assessments will be. Worse, this is a vicious positive feedback loop if you use these erroneous 'telos' assessments to adopt further beliefs, which will most likely also be false, and make your subsequent assessments even more inaccurate.

As usual, Eliezer put it best in Ethical Injunctions.

I will at least agree with you that it’s dangerous to folks who are not already rationalists

Being a rationalist isn't a badge that protects you from wrong thinking. Being a rationalist is the discipline and art of correct thought. When you stop practising correct thought, when you stop seeking truth, being a rationalist won't save you, because at that moment you aren't one.

People used to come to this site all the time complaining about the warning about politics: Politics is the Mind-Killer. They would say "for ordinary people, sure it might be dangerous, but we rationalists should be able to discuss these things safely if we're so rational", heedless of the fact that the warning was meant not for ordinary people, but for rationalists. The message was not "if you are weak, you should avoid this dangerous thing; you may demonstrate strength by engaging the dangerous thing and surviving" but "you are weak; avoid this dangerous thing in order to become strong".

Replies from: Kaj_Sotala, gworley
comment by Kaj_Sotala · 2018-10-15T15:44:19.801Z · LW(p) · GW(p)
But of course we can, as evidenced by the fact that people make predictions that turn out to be correct, and carry out plans and achieve goals based on those predictions all the time.

That people make predictions which turn out to be correct does not show that the predictions were chosen according to the criterion of truth; it shows that the predictions happened to correlate with the truth. E.g. people just doing whatever tradition tells them to often arrive at good outcomes when they are in an environment that the tradition is well-adapted to. If asked, they might appeal to the good outcomes as evidence for the tradition being correct; but while it might be correct in some circumstances, that does not establish that "what does tradition tell me" would be the correct criterion to use in every circumstance.

The general point here is that the human brain does not have magic access to the criteria of truth; it only has access to its own models. And what it uses to check whether its own models are correlated with the truth are... its own models. It's possible, and in fact very common, to be critically mistaken about whether or not your reasoning is actually tracking reality. The "since the criterion for knowing what is true is unreliably known, we must be choosing that criterion on some other basis than truth" from the OP is descriptive rather than prescriptive; our brains are choosing our beliefs based on a wide variety of criteria [LW · GW], some of which correlate with the truth more than others.

I have every now and then had the experience of realizing that something which I'd considered obviously true, was rather an emotional attachment to something which I wanted to be true, but which was masquerading as being grounded in the criterion of truth. And that belief could easily have resisted revision for a very long time, perhaps forever, had I not decided to explicitly investigate it. (A rule of thumb: whenever I think something like "these people have to be idiots for believing in something like this", then my belief probably has a fair part of tribal motivation masquerading as truth-seeking.)

This is a way by which metarationality actually promotes epistemic rationality as well: as jessicata mentions [LW(p) · GW(p)] in the other comment thread, you can have a "guess-and-check-and-refine" model, where you try out many different ontologies and use the one which seems to make the best predictions. But in order to do that effectively, you need to cultivate a lightness of belief, where you are capable of flexibly switching between ontologies - especially testing out the ontologies which you experience as the most repugnant, since those are the ones that you're most likely to have rejected due to tribal rather than logical reasons.

And part of how you achieve that lightness of belief is by internalizing the extent to which your brain does not, and logically cannot, choose its beliefs on the basis of truth. As long as you haven't fully internalized that, incorrect beliefs have a much better chance of masquerading as the truth, by making you think that you have chosen them on the basis of their truthfulness alone.

Replies from: nshepperd
comment by nshepperd · 2018-10-15T16:37:08.949Z · LW(p) · GW(p)

Well, this is a long comment, but this seems to be the most important bit:

The general point here is that the human brain does not have magic access to the criteria of truth; it only has access to its own models.

Why would you think "magic access" is required? It seems to me the ordinary non-magic causal access granted by our senses works just fine.

All that you say about beliefs often being critically mistaken due to e.g. emotional attachment is of course true, and that is why we must be ruthless in rejecting any reasons for believing things other than truth -- and if we find that a belief is without reasons after that, we should discard it. The problem is this seems to be exactly the opposite of what "postrationality" advocates: using the lack of "magic access" to the truth as an excuse to embrace non-truth-based reasons for believing things.

Replies from: Kaj_Sotala, gworley, gworley
comment by Kaj_Sotala · 2018-10-15T18:42:58.216Z · LW(p) · GW(p)
Why would you think "magic access" is required?

Because there's no causal pathway through which we could directly evaluate whether or not our brains are actually tracking reality. Schizophrenics also think that they have causal access to the truth as granted by their senses, and might maintain that belief until their death.

Since there's no direct causal pathway, it would have to work through some non-causal means, i.e. magic.

The problem is this seems to be exactly the opposite of what "postrationality" advocates: using the lack of "magic access" to the truth as an excuse to embrace non-truth-based reasons for believing things.

My comment was trying to explain how explicitly adopting beliefs for other reasons than truth might actually help you reject non-truthful beliefs. You can be mistaken about what's actually true, and by testing out ontologies that have been arrived at for other reasons than truth, you may find out that they actually track truth better than your original ontology did. Or even if they don't, by intentionally adopting a different ontology and noticing how it forces your perceptions to fit its mold, you may become more aware of how your normal ontology does likewise. You may also become more aware of how aspects of your normal ontology have also been chosen for their usefulness rather than their truth value. Both of these are useful for revising the original ontology further.

It's less "embracing non-truth-based reasons for believing things" and more "given that our brains are always using non-truth-based reasons for criteria for believing in things, how do we use that to our advantage". To illustrate the difference, suppose an overly-simplified toy model in which at least 20% of your reasons for believing in things are always non-truth-tracking. It's not possible to go below that, but it is possible to go over that.

"Embracing non-truth-based reasons for believing things" sounds to me like it says "well since 20% of our reasons are always non-truth-tracking, it doesn't matter even if we go to 90%". That would be obviously wrong.

Whereas by "given that our brains are always using non-truth-based reasons for criteria for believing in things, how do we use that to our advantage", I mean something like "Well we can't go below 20%, but we can influence what that 20% consists of, so let's swap that desire to believe ourselves to be better than anyone else into some desire that makes us happier and is less likely to cause needless conflict. Also, by learning to manipulate the contents of that 20%, we become better capable at noticing when a belief comes from the 20% rather than the 80%, and adjusting accordingly".

Replies from: nshepperd, Richard_Kennaway
comment by nshepperd · 2018-10-16T00:11:45.206Z · LW(p) · GW(p)

Because there’s no causal pathway through which we could directly evaluate whether or not our brains are actually tracking reality.

I don't know what "directly" means, but there certainly is a causal pathway, and we can certainly evaluate whether our brains are tracking reality. Just make a prediction, then go outside and look with your eyes to see if it comes true.

Schizophrenics also think that they have causal access to the truth as granted by their senses, and might maintain that belief until their death.

So much the worse for schizophrenics. And so?

“Well we can’t go below 20%, but we can influence what that 20% consists of, so let’s swap that desire to believe ourselves to be better than anyone else into some desire that makes us happier and is less likely to cause needless conflict. Also, by learning to manipulate the contents of that 20%, we become better capable at noticing when a belief comes from the 20% rather than the 80%, and adjusting accordingly”.

I have a hard time believing that this sort of clever reasoning will lead to anything other than making your beliefs less accurate and merely increasing the number of non-truth-based beliefs above 20%.

The only sensible response to the problem of induction is to do our best to track the truth anyway. Everybody who comes up with some clever reason to avoid doing this thinks they've found some magical shortcut, some powerful yet-undiscovered tool (dangerous in the wrong hands, of course, but a rational person can surely use it safely...). Then they cut themselves on it.

Replies from: Kaj_Sotala, gworley, TAG
comment by Kaj_Sotala · 2018-10-17T08:28:07.081Z · LW(p) · GW(p)
I don't know what "directly" means, but there certainly is a causal pathway, and we can certainly evaluate whether our brains are tracking reality. Just make a prediction, then go outside and look with your eyes to see if it comes true.

Suppose that I do a rain-making dance in my backyard, and predict that as a consequence of this, it will rain tomorrow. Turns out that it really does rain the next day. Now I argue that I have magical rain-making powers.

Somebody else objects, "of course you don't, it just happened to rain by coincidence! You need to repeat that experiment!"

So I repeat the rain-making dance on ten separate occasions, and on seven out of ten times, it does happen to rain anyway.

The skeptic says, "ha, your rain-making dance didn't work after all!" I respond, "ah, but it did work on seven out of ten times; medicine can't be shown to reliably work every time either, but my magic dance does work statistically significantly often."

The skeptic answers, "you can't establish statistical significance without something to compare to! This happens to be rainy season, so it would rain on seven out of ten days anyway!"

I respond, "ah, but notice how it is the custom for people in my tribe do the rain-making dance every day during rainy season, and to not do it during dry season; it is our dance that causes the rainy season."

The skeptic facepalms. "Your people have developed a tradition to dance during rainy season, but it's the rain that has caused your dance, not the other way around!"

... and then we go on debating forever.

My point here is that just looking at raw observations is insufficient to judge any nontrivial model. We are always evaluating our observations in light of an existing model; it is the observation + model that says whether something is true, not the observation itself. I dance and it rains, and my model says that dancing causes rain: my predicted observation came true, so I consider my model validated. The skeptic's model says that dancing does not cause rain but that it rains all the time during the rainy season anyway, so he considers his own model just as confirmed by the observation.

You can, of course, use observations to evaluate models. But to do that, you need to use a meta-model. When I say that we don't have direct access to the truth, this is what I mean; both you, me, and the schizophrenic all tend to think that we are correctly drawing the right conclusions from our observations, but at least one of us is actually running seriously flawed models and meta-models, and may never know it, being trapped in evaluating all of their models through seriously flawed meta-models.

As clone of saturn notes [LW(p) · GW(p)], the deepest meta-model of them all is the one that is running below the level of conscious decisions; the set of low-level processes which decides what actions we take and what thoughts we think. This is a reinforcement learning system which responds to rewards: if particular thoughts or assumptions (such as the assumption of a rain-making dance actually producing rain, or the suggestion of statistical significance being an important factor to consider when evaluating predictions) have led to actions which brought the organism rewards (internally or externally generated), then those kinds of thoughts and assumptions will be reinforced.

In other words, we end up having the kinds of beliefs that seem useful, as evaluated by whether they succeed in giving us rewards. Epistemic and instrumental rationality were the same all along. (I previously discussed this in more detail in my posts What are concepts for [LW · GW] and World-models as tools [LW · GW].)
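
A cartoon of that claim (a sketch only; Kaj gives no formal model, and the assumptions, weights, and reward probability below are made up for the example): a reward-driven selector strengthens whichever assumptions preceded reward, so an assumption that merely co-occurs with good outcomes, like the rain-dance belief above, gets reinforced just as much as a true one.

```python
import random

weights = {"dance brings rain": 1.0, "rain just follows the season": 1.0}

def reward(assumption):
    # Hypothetical environment: it rains on 70% of days regardless of dancing.
    return 1.0 if random.random() < 0.7 else 0.0

for _ in range(1000):
    pick = random.choices(list(weights), weights=list(weights.values()))[0]
    weights[pick] += 0.1 * reward(pick)          # reinforce whatever paid off

print(weights)  # both assumptions end up heavily reinforced
```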

I have a hard time believing that this sort of clever reasoning will lead to anything other than making your beliefs less accurate and merely increasing the number of non-truth-based beliefs above 20%.

Well:

  • Do you think about distances in Metric or Imperial units? Both are equally true, so probably in whichever units you happen to be more fluent in.
  • Do you use Newtonian mechanics or full relativity for calculating the motion of some object? Relativity is more true, but sometimes the simpler model is good enough and easier to calculate, so it may be better for the situation.
  • Do you consider your romantic partner a wonderful person who you love dearly and want to be happy, or someone who does various things that benefit you, in exchange for you doing various things that benefit them? Both are true, but the former framing is probably one that will make for a happier and more meaningful relationship.

You talk about "clever reasoning" that "makes your beliefs less accurate", but as these examples should hopefully demonstrate, at any given time there are an infinite number of more-or-less true ways of looking at some situation - and when we need to choose between several ways of framing the situation which are equally true, we always end up choosing one or the other based on its usefulness. If we didn't, it would be impossible to function, since there'd be no criteria for choosing between them. (And sometimes we go with the approximation that's less strictly true, if it's good enough for the situation; that is, if it's more useful to go with it.) That's the 20%.

Replies from: nshepperd, SaidAchmiz
comment by nshepperd · 2018-10-18T02:50:55.632Z · LW(p) · GW(p)

This stuff about rain dancing seems like just the most banal epistemological trivialities, which have already been dealt with thoroughly in the Sequences. The reasons why such "tests" of rain dancing don't work are well known and don't need to be recapitulated here.

But to do that, you need to use a meta-model. When I say that we don’t have direct access to the truth, this is what I mean;

This has nothing to do with causal pathways, magic or otherwise, direct or otherwise. Magic would not turn a rock into a philosopher even if it should exist.

Yes, carrying out experiments to determine reality relies on Occam's razor. It relies on Occam's razor being true. It does not in any way rely on me possessing some magical universally compelling argument for Occam's razor. Because Occam's razor is in fact true in our universe, experiment does in fact work, and thus the causal pathway for evaluating our models does in fact exist: experiment and observation (and bayesian statistics).

I'm going to stress this point because I noticed others in this thread make this seemingly elementary map-territory confusion before (though I didn't comment on it there). In fact it seems to me now that conflating these things is maybe actually the entire source of this debate: "Occam's razor is true" is an entirely different thing from "I have access to universally compelling arguments for Occam's razor", as different as a raven and the abstract concept of corporate debt. The former is true and useful and relevant to epistemology. The latter is false, impossible and useless.

Because the former is true, when I say "in fact, there is a causal pathway to evaluate our models: looking at reality and doing experiments", what I say is, in fact, true. The process in fact works. It can even be carried out by a suitably programmed robot with no awareness of what Occam's razor or "truth" even is. No appeals or arguments about whether universally compelling arguments for Occam's razor exist can change that fact.
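
As a hedged illustration of that "suitably programmed robot" (a sketch only; the scoring rule, candidate models, and data below are invented for the example, not anything nshepperd describes): the program scores candidate models by prediction error plus a complexity penalty and keeps the best one, embodying an Occam-like preference without ever representing "truth" or "Occam's razor".

```python
def complexity(model):
    return len(model)                                     # crude proxy: parameter count

def predict(model, x):
    return sum(c * x**i for i, c in enumerate(model))     # polynomial in x

def score(model, data):
    error = sum((predict(model, x) - y) ** 2 for x, y in data)
    return error + 0.5 * complexity(model)                # fit plus simplicity

# Observations generated by a simple law (y = 2x), unknown to the program.
data = [(x, 2 * x) for x in range(10)]

candidates = [
    [0.0, 2.0],                  # y = 2x
    [0.0, 2.0, 0.0, 0.0, 0.0],   # identical predictions, more parameters
    [1.0, 1.5],                  # worse fit
]
print(min(candidates, key=lambda m: score(m, data)))      # [0.0, 2.0]
```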

(Why am I so lucky as to be a mind whose thinking relies on Occam's razor in a world where Occam's razor is true? Well, animals evolved via natural selection in an Occamian world, and those whose minds were more fit for that world survived...)

But honestly, I'm just regurgitating Where Recursive Justification Hits Bottom at this point.

This is a reinforcement learning system which responds to rewards: if particular thoughts or assumptions (...) have led to actions which brought the organism rewards (internally or externally generated), then those kinds of thoughts and assumptions will be reinforced.

This seems like a gross oversimplification to me. The mind is a complex dynamical system made of locally reinforcement-learning components, which doesn't do any one thing all the time.

In other words, we end up having the kinds of beliefs that seem useful, as evaluated by whether they succeed in giving us rewards. Epistemic and instrumental rationality were the same all along.

And this seems simply wrong. You might as well say "epistemic rationality and chemical action-potentials were the same all along". Or "jumbo jets and sheets of aluminium were the same all along". A jumbo jet might even be made out of sheets of aluminium, but a randomly chosen pile of the latter sure isn't going to fly.

As for your examples, I don't have anything to add to Said's observations.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-10-18T07:34:56.589Z · LW(p) · GW(p)

reasons why such "tests" of rain dancing don't work are well known and don't need to be recapitulated here.

Obviously. Which is why I said that the point was not any of the specific arguments in that debate - they were totally arbitrary and could just as well have been two statisticians debating the validity of different statistical approaches - but the fact that any two people can disagree about anything in the first place, as they have different models of how to interpret their observations.

"Occam's razor is true" is an entirely different thing from "I have access to universally compelling arguments for Occam's razor", as different as a raven and the abstract concept of corporate debt.

This is very close to the distinction that I have been trying to point at; thank you for stating it more clearly than I managed to. The way that I'd phrase it is that there's a difference between considering a claim to be true, and considering its justification universally compelling.

It sounds like you have been interpreting me to say something like "Occam's Razor is false because its justification is not universally compelling". That is not what I have been trying to say. Rather, my claim has been "we can consider Occam's Razor true despite its justification not being universally compelling, but because there are no universally compelling justifications, we should keep trying out different justifications and seeing whether there are any that would seem to work even better".

If you say "but that's totally in line with 'Where Recursive Justification Hits Bottom' and the standard LW canon..." then yes, it is. That's my point. Especially since 'Recursive Justification' also says that we should just decide to believe in Occam's Razor, since it doesn't seem particularly useful to do otherwise, and because practically speaking, we don't have any better alternative:

Should I trust Occam's Razor? Well, how well does (any particular version of) Occam's Razor seem to work in practice? What kind of probability-theoretic justifications can I find for it? When I look at the universe, does it seem like the kind of universe in which Occam's Razor would work well? [...]

The chain of examination continues—but it continues, unavoidably, using my current brain and my current grasp on reasoning techniques. What else could I possibly use? [...]

At present, I start going around in a loop at the point where I explain, "I predict the future as though it will resemble the past on the simplest and most stable level of organization I can identify, because previously, this rule has usually worked to generate good results; and using the simple assumption of a simple universe, I can see why it generates good results; and I can even see how my brain might have evolved to be able to observe the universe with some degree of accuracy, if my observations are correct." [...]

But for now... what's the alternative to saying, "I'm going to believe that the future will be like the past on the most stable level of organization I can identify, because that's previously worked better for me than any other algorithm I've tried"? [...]

At this point I feel obliged to drag up the point that rationalists are not out to win arguments with ideal philosophers of perfect emptiness; we are simply out to win. [...]

The point is not to be reflectively consistent. The point is to win.

As for my story about how the brain works: yes, it is obviously a vast simplification. That does not make it false, especially given that "the brain learns to use what has worked before and what it thinks is likely to make it win in the future" is exactly what Eliezer is advocating in the above post.

But what Eliezer also advocates in that post is not elevating any rule - Occam's Razor included - into an unquestioned axiom, but continuing to question even that, if you can:

Being religious doesn't make you less than human. Your brain still has the abilities of a human brain. The dangerous part is that being religious might stop you from applying those native abilities to your religion—stop you from reflecting fully on yourself. People don't heal their errors by resetting themselves to an ideal philosopher of pure emptiness and reconsidering all their sensory experiences from scratch. They heal themselves by becoming more willing to question their current beliefs, using more of the power of their current mind.

This is why it's important to distinguish between reflecting on your mind using your mind (it's not like you can use anything else) and having an unquestionable assumption that you can't reflect on. [...]

The important thing is to hold nothing back in your criticisms of how to criticize; nor should you regard the unavoidability of loopy justifications as a warrant of immunity from questioning.

I would say that there exist two kinds of metarationality: weak and strong. Weak metarationality is straightforwardly compatible with standard LW rationality, because of things like framing effects and self-fulfilling beliefs, as I have been arguing in other comments. But because the standard canon has given the impression that truth should be the only criterion for beliefs, and missed the fact that there are plenty of beliefs that one can choose without violating Occam's Razor, this seems "metarational" and weird. Arguably this shouldn't be called meta/postrationality in the first place, because it's just standard rationality.

One way to phrase strong metarationality might be that classic LW rationality is what you get when you take a specific set of axioms as your starting point and build on top of that. Metarationality is what you get when you acknowledge that this does indeed seem like the right thing to do most of the time, but that we should also be willing (as Eliezer advocates above) to question that, and try out different starting axioms as well, to see whether there would be any that would be even better.

In my experience, strong metarationality isn't useful because it points to basic axioms that are better than LW's standard ones - if it does point to any, I haven't found them, and the standard assumptions continue to be the most useful ones. What does make it somewhat useful is that when you practice questioning everything - e.g. distinguishing between "Occam's Razor is true" and "I have assumed Occam's Razor to be true because that seems useful" - you get better at catching assumptions which don't fall directly out of the standard axioms, and which you've just assumed to be true without good justification.

E.g. "my preferred system of government is the best one" is a belief that should logically be assigned much lower confidence than "Occam's Razor is true"; but the brain only has limited precision in assigning credence values to claims. So most people have beliefs which are more like the government one than the Occam's Razor one, despite being assigned a similar level of credence as the Occam's Razor one is. By questioning and testing even beliefs which are like Occam's Razor, one can end up questioning and revising beliefs which actually should be questioned, which one might never have questioned otherwise. This is valuable even if the Occam's Razor-like beliefs survive that questioning unscathed - but the exercise does not work unless one actually does make a serious attempt to question them.

Replies from: nshepperd, nshepperd
comment by nshepperd · 2018-10-18T17:31:12.692Z · LW(p) · GW(p)

I'll have more to say later but:

The way that I’d phrase it is that there’s a difference between considering a claim to be true, and considering its justification universally compelling.

Both of these are different from the claim actually being true. The fact that Occam's razor is true is what causes the physical process of (occamian) observation and experiment to yield correct results. So you see, you've already managed to rephrase what I've been saying into something different by conflating map and territory.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-10-19T13:25:55.246Z · LW(p) · GW(p)

Indeed, something being true is further distinct from us considering it true. But given that the whole point of metarationality is fully incorporating the consequences of realizing the map/territory distinction and the fact that we never observe the territory directly (we only observe our brain's internal representation of the external environment, rather than the external environment directly), a rephrasing that emphasizes the way that we only ever experience the map seemed appropriate.

comment by nshepperd · 2018-10-19T00:54:22.820Z · LW(p) · GW(p)

As for my story about how the brain works: yes, it is obviously a vast simplification. That does not make it false, especially given that “the brain learns to use what has worked before and what it thinks is likely to make it win in the future” is exactly what Eliezer is advocating in the above post.

Even if true, this is different from "epistemic rationality is just instrumental rationality"; as different as adaptation executors are from fitness maximisers.

Separately, it's interesting that you quote this part:

The important thing is to hold nothing back in your criticisms of how to criticize; nor should you regard the unavoidability of loopy justifications as a warrant of immunity from questioning.

Because it seems to me that this is exactly what advocates of "postrationality" here are not doing, when they take the absence of universally compelling arguments as license to dismiss rationality and truth-based arguments against their positions.¹

Eliezer also says this:

Always apply full force, whether it loops or not—do the best you can possibly do, whether it loops or not—and play, ultimately, to win.

It seems to me that applying full force in criticism of postrationality amounts to something like the below:

"Indeed, compellingness-of-story, willingness-to-life, mythic mode [LW · GW], and many other non-evidence-based criteria are alternative criteria which could be used to select beliefs. However we have huge amounts of evidence (catalogued in the Sequences, and in the heuristics and biases literature) that these criteria are not strongly correlated to truth, and therefore will lead you to holding wrong beliefs, and furthermore that holding wrong beliefs is instrumentally harmful, and, and [the rest of the sequences, Ethical Injunctions, etc]..."

"Meanwhile, we also have vast tracts of evidence that science works, that results derived with valid statistical methods replicate far more often than any others, that beliefs approaching truth requires accumulating evidence by observation. I would put the probability that rational methods are the best criteria I have for selecting beliefs at . Hence, it seems decisively not worth it to adopt some almost certainly harmful 'postrational' anti-epistomology just because of that probability. In any case, per Ethical Injunctions, even if my probabilities were otherwise, it would be far more likely that I've made a mistake in reasoning than that adopting non-rational beliefs by such methods would be a good idea."

Indeed, much of the Sequences could be seen as Eliezer considering alternative ways of selecting beliefs or "viewing the world", analyzing these alternative ways, and showing that they are contrary to and inferior to rationality. Once this has been demonstrated, we call them "biases". We don't cling to them on the basis that "we can't know the criterion of truth".

Advocates of postrationality seem to be hoping that the fact that P(Occam's razor) < 1 makes these arguments go away. It doesn't work like that. P(Occam's razor) = 1 - ε at most makes ε of these arguments go away. And we have a lot of evidence for Occam's razor.
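To put rough numbers on that structure (the payoffs below are arbitrary, chosen only to illustrate the shape of the argument, not anyone's actual utilities): even a sizeable residual doubt about Occam's razor barely moves the expected value of sticking with evidence-based criteria, because that doubt only ever gets weight ε in the comparison.

```python
p_occam = 0.999   # illustrative credence that Occam's razor holds; epsilon = 0.001

# Arbitrary illustrative payoffs for each belief-selection policy in each world.
payoff = {
    ("evidence-based", "occam holds"): 1.0,
    ("evidence-based", "occam fails"): 0.0,
    ("story-based",    "occam holds"): 0.1,
    ("story-based",    "occam fails"): 0.2,  # even granting it an edge in the weird world
}

def expected_value(policy):
    return (p_occam * payoff[(policy, "occam holds")]
            + (1 - p_occam) * payoff[(policy, "occam fails")])

print(expected_value("evidence-based"))  # 0.999
print(expected_value("story-based"))     # ~0.1001 -- the epsilon branch can't rescue it
```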

¹ As gworley seems to do here [LW(p) · GW(p)] and here [LW(p) · GW(p)] seemingly expecting me to provide a universally compelling argument in response.

Replies from: Kaj_Sotala, gworley
comment by Kaj_Sotala · 2018-10-19T12:28:21.729Z · LW(p) · GW(p)

Advocates of postrationality seem to be hoping that the fact that P(Occam's razor) < 1 makes these arguments go away. It doesn't work like that.

This (among other paragraphs) is an enormous strawman of everything that I have been saying. Combined with the fact that the general tone of this whole discussion so far has felt adversarial rather than collaborative, I don't think that I am motivated to continue any further.

Replies from: nshepperd
comment by nshepperd · 2018-10-19T18:34:30.674Z · LW(p) · GW(p)

It doesn't seem to be a strawman of what eg. gworley and TAG have been saying, judging by the repeated demands for me to supply some universally compelling "criterion of truth" before any of the standard criticisms can be applied. Maybe you actually disagree with them on this point?

It doesn't seem like applying full force in criticism is a priority for the 'postrationality' envisioned by the OP, either, or else they would not have given examples (compellingness-of-story, willingness-to-life) so trivial to show as bad ideas using standard arguments.

Replies from: gworley, TAG
comment by Gordon Seidoh Worley (gworley) · 2018-10-19T19:56:51.608Z · LW(p) · GW(p)

I agree with Kaj on this point; however, I also don't think you're intentionally responding to a strawman version of what we're presenting. What we're arguing for hinges on what seems to be a subtle point for most people (it doesn't feel subtle to me, but I'm sympathetic to technical philosophical positions seeming subtle to others), so it's easy to conflate our position with, say, postmodernist-style epistemic relativism: although it's drastically different from that, it's different for technical reasons that may not be apparent from reading the broad strokes of what we're saying.

I suspect what's going on in this discussion is something like the following: me, Kaj, TAG, and others are coming from a position that is relatively small in idea space, but there are other ideas that sort-of pattern match to it if you don't look too closely at the details, and those get confused for the point we're trying to make; people then respond to those other ideas rather than the one we're holding. Although we're trying our best to cut idea space so that you see the part we're talking about, the process is inexact. I've pointed to it with the technical language of philosophy, but that language is easily mistaken for non-technical language because it reuses common words (physics sometimes has the same problem: you pick a word because it's a useful metaphor but give it a technical meaning, and then people misunderstand because they think too much in terms of the metaphor and not in terms of the precise model the word refers to), and it requires a certain amount of fluency with philosophy in general. For example, of all the comments on this post, I think so far only jessicata has asked for clarification in a way that is clearly framed in terms of technical philosophy.

This is not necessarily to demand that you engage with technical philosophy if you don't want to, but I suspect it is why we continue to have trouble communicating (or, if there are other reasons, a major one). I don't know a way to explain these points that isn't in that language and isn't also easily confused with other ideas I wouldn't endorse, so there may not be much way forward in presenting metarationality to you in a way that I would agree you understand and that allows you to express a rejection I would consider valid (if indeed such a reason for rejection exists; if I knew one, I wouldn't hold these views!). The only other ways we have of talking about these things rely much more on appeals to intuitions that you don't seem to share, and transmitting those intuitions is a separate project from what I want to do here, although Kaj's and others' responses do a much better job than mine of attempting that transmission.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-10-19T20:38:25.566Z · LW(p) · GW(p)

Although we're trying our best to cut idea space so that you see the part we're talking about, the process is inexact. I've pointed to it with the technical language of philosophy, but that language is easily mistaken for non-technical language because it reuses common words

I am sympathetic to this sort of explanation. Could you, then, note specifically which of your terms are supposed to be interpreted as technical language, and link to some definitions / explanations of them? (Can such be found on the SEP, for instance?)

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-19T20:40:28.155Z · LW(p) · GW(p)

Nope, this is explicitly what I wanted to avoid doing, although I note I've already been sucked in way deeper into this than I ever meant to be.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-10-19T20:54:42.680Z · LW(p) · GW(p)

But… why would you want to avoid this? (Surely it’s not difficult to post a link?)

comment by TAG · 2018-11-01T13:22:18.328Z · LW(p) · GW(p)

I did not ask for a universally compelling argument: you brought that in.

Trying to solve problems by referring to the Sequences has a way of leading to derailment: people match the topic at hand to whichever of Yudkowsky's writings is least irrelevant, even if it is not relevant enough to be on the same topic.

comment by Gordon Seidoh Worley (gworley) · 2018-10-19T03:12:12.135Z · LW(p) · GW(p)

Hmm, I think there is some kind of category error happening if you think I'm asking for universally compelling arguments: I agree they don't and can't exist, as a straightforward corollary of epistemic circularity. You might feel that I am asking for them, though, because I think that assuming you know the criterion of truth, or can learn it, would be equivalent to saying you could find a universally compelling argument, which is exactly the positivist stance. If you disagree then I suspect whatever disagreement we have has become extremely esoteric since I don't see a natural space into which you could claim the criterion of truth is knowable and that there are no universally compelling arguments.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-10-19T03:48:10.217Z · LW(p) · GW(p)

I’m not nshepperd, but:

The non-existence of universally compelling arguments has nothing to do with whether “the criterion of truth is knowable”, or “epistemic circularity”, or any other abstruse epistemic issues, or any other non-abstruse epistemic issues.

There cannot be a universally compelling argument because for any given argument, there can exist a mind which is not persuaded by it.

If it were the case that “the criterion of truth is knowable” (whatever that means), and you had what you considered to be a universally compelling argument, I could still build a mind which remains—stubbornly, irrationally (?), impenetrably—unconvinced by that argument. And that would make that argument not universally compelling after all.
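If it helps, the construction is almost trivially easy to sketch (a toy illustration, with a "mind" standing in for any system that takes arguments as input; the names are mine, not anything from the Sequences):

```python
from typing import Callable

Mind = Callable[[str], str]  # a "mind" here is just: argument in, verdict out

def make_unmoved_mind(target_argument: str) -> Mind:
    """Build a mind that rejects one specific argument, whatever that argument is."""
    def mind(argument: str) -> str:
        if argument == target_argument:
            return "unconvinced"          # stubbornly, irrationally, impenetrably
        return "weighs it on the merits"
    return mind

your_best_argument = "some supposedly universally compelling argument"
stubborn = make_unmoved_mind(your_best_argument)
print(stubborn(your_best_argument))  # "unconvinced" -- so it wasn't universally compelling
```

Since such a function exists for any argument whatsoever, no argument can be compelling to all possible minds.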

There is nothing esoteric about any of this; Eliezer explained it all very clearly in the Sequences.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-19T18:14:30.116Z · LW(p) · GW(p)
The non-existence of universally compelling arguments has nothing to do with whether “the criterion of truth is knowable”, or “epistemic circularity”, or any other abstruse epistemic issues, or any other non-abstruse epistemic issues.
There cannot be a universally compelling argument because for any given argument, there can exist a mind which is not persuaded by it.

This feels to me similar to saying "don't worry about all that physics telling us we can't travel faster than light, we have engineering reasons to think we can't do it" as if this were a dismissal of the former when it's in fact an expression of it. Further, Eliezer doesn't really prove his point in that post if you want a detailed philosophical explanation of the point. Instead, as is often the case, Eliezer is smart and manages to come to a conclusion consistent with the philosophical details despite making arguments at a level where it's not totally clear he can support the claims he's making (which is fine because he wasn't writing to do that, but it does make his words on the subject less relevant here because they're talking to a different level of abstraction).

Thus, it seems that you're just agreeing with me even if you're talking at a different level of abstraction, but I take it from your tone you meant to disagree, so maybe you meant to press some other point that's not clear to me from what you wrote?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-10-19T19:43:17.024Z · LW(p) · GW(p)

The reason I cited is not an “engineering reason”; it is fundamental. It seems absurd to say that it’s “an expression of” something like “epistemic circularity”. A more apt analogy would be to computability theory. If we make some assertion in computer science, and in support of that assertion, prove that we can, or cannot, construct some particular sort of computer program, is that an “engineering reason”? Applying such a term seems tendentious, at best.

Further, Eliezer doesn’t really prove his point in that post if you want a detailed philosophical explanation of the point.

If you disagree with Eliezer’s arguments in that post, I would be interested in reading what you have to say (as would others, I am sure).

Thus, it seems that you’re just agreeing with me even if you’re talking at a different level of abstraction, but I take it from your tone you meant to disagree, so maybe you meant to press some other point that’s not clear to me from what you wrote?

You said:

I don’t see a natural space into which you could claim the criterion of truth is knowable and that there are no universally compelling arguments

The phrasing is odd (“natural space”? “into”?), but unless there is some very odd meaning hiding behind that phrasing, what you seem to be saying is that if “the criterion of truth is knowable” then there must exist universally compelling arguments. (Because ¬(P ∧ ¬Q) => (P -> Q).)

And I am saying: this is wrong and confused. If “the criterion of truth is knowable”, that has exactly zero to do with whether there exist universally compelling arguments. Criterion of truth or no criterion of truth, I can always build a mind which fails to be convinced by any given argument you propose. Therefore, any argument you propose will fail to be universally compelling.

This is what Eliezer was saying. It is very simple. If you disagree with this reasoning, do please explain why! (And in that case it would be best, I think, if you posted your disagreement as a comment to Eliezer’s post. I will, of course, gladly read it.)

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-19T20:38:42.648Z · LW(p) · GW(p)
And I am saying: this is wrong and confused. If “the criterion of truth is knowable”, that has exactly zero to do with whether there exist universally compelling arguments. Criterion of truth or no criterion of truth, I can always build a mind which fails to be convinced by any given argument you propose. Therefore, any argument you propose will fail to be universally compelling.

So I don't disagree with Eliezer's post at all; I'm saying he doesn't give a complete argument for the position. It seems to me the only point of disagreement is that you think knowability of the criterion of truth does not imply the existence of universally compelling arguments, so let me spell that out. This is to say: why is it that you can build a mind that fails to be convinced by any given argument? Eliezer only intimates this and doesn't fully explain it.

Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/algorithm to assess if any given statement is true. Let P be a statement. Then there exists some argument, A, contingent on C such that A implies P or ~P. Thus for all P we can know if P or ~P. This would make A universally compelling, i.e. A is a mind-independent argument for the truth value of all statements that would convince even rocks.

Since it seems we're all in agreement C does not exist, I think any disagreement we have lingering is about something other than the point I originally laid out.

Also, for what it's worth since you bring up computability theory, knowing the criterion of truth would also imply being able to solve the halting problem since you could always answer the question "does this program halt?".
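To sketch that reduction (with C treated as a purely hypothetical oracle, since no such procedure can actually be implemented, and the function names being mine):

```python
def C(statement: str) -> bool:
    """Hypothetical criterion-of-truth oracle: correctly classifies any statement as true or false.

    Assumed here only to show what its existence would entail; it cannot actually be built.
    """
    raise NotImplementedError("no such oracle exists")

def halts(program_source: str, program_input: str) -> bool:
    """If C existed, a halting decider would follow immediately."""
    return C(f"the program {program_source!r} halts when run on input {program_input!r}")
```

A truth procedure that works on arbitrary statements would have to work on statements about halting too, which is why its existence would be at least as strong as a halting oracle.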

(Also, I love the irony that I may fail to convince you because no argument is universally compelling!)

Replies from: SaidAchmiz, gworley, clone of saturn
comment by Said Achmiz (SaidAchmiz) · 2018-10-19T20:51:54.044Z · LW(p) · GW(p)

Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/​algorithm to assess if any given statement is true. Let P be a statement. Then there exists some argument, A, contingent on C such that A implies P or ~P. Thus for all P we can know if P or ~P. This would make A universally compelling

But of course it wouldn’t. What? This seems completely unrelated to compellingness (universal or otherwise). I have but to build a mind that does not implement the procedure in question, or doesn’t implement it for some specific argument(s), or does implement it but then someone reverses it (cf. Eliezer’s “little grey man”), etc.

a mind-independent argument for

There is no such thing as a “mind-independent argument for” anything. That, too, was Eliezer’s point.

For example, suppose C exists. However, it is then an open question whether I believe that C exists. How might I come to believe this? Perhaps I might be presented with an argument for C’s existence. I might find this argument compelling, or not. This is dependent on my mind—i.e., both on my mind existing, and on various specific properties of my mind (such as implementing modus ponens).

And who is doing this attempted convincing? Well, perhaps you are. You believe (in this hypothetical scenario) that C exists. And how did you come to believe this? Whatever the chain of causality was that led to this state of affairs, it could only be very much dependent on various properties of your mind.

Again, a “mind-independent argument” for anything is a nonsensical concept. Who is arguing, and with whom? Who is trying to convince whom? Without minds, the very concept of there being arguments, and those arguments being compelling or not compelling, is meaningless.

This is to say: why is it that you can build a mind that fails to be convinced by any given argument? Eliezer only intimates this and doesn’t fully explain it.

But he does. He explains it very clearly and explicitly! Building a mind that behaves in some specific way in some specific circumstance(s) is all that’s required. Simply build a mind that, when presented with argument A, finds that argument unconvincing. (Again, see the “little grey man” section.) That is all.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-19T22:04:25.767Z · LW(p) · GW(p)

Yes, exactly, you get it. I'm not sure what confusion remains or you think remains. The only point seems here:

But of course it wouldn’t. What? This seems completely unrelated to compellingness (universal or otherwise). I have but to build a mind that does not implement the procedure in question, or doesn’t implement it for some specific argument(s), or does implement it but then someone reverses it (cf. Eliezer’s “little grey man”), etc.

The counterfactual I'm proposing with C is exactly one that would allow not just any mind, but literally anything at all to comprehend A. The existence of C would create a universe wholly unlike our own, which is why I think we're all in agreement that the existence of such a thing is extremely unlikely even though we can't formally prove that it doesn't exist.

Replies from: nshepperd, SaidAchmiz
comment by nshepperd · 2018-10-19T23:29:01.944Z · LW(p) · GW(p)

It seems that you don't get it. Said just demonstrated that even if C exists it wouldn't imply a universally compelling argument.

In other words, this:

Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/​algorithm to assess if any given statement is true. Let P be a statement. Then there exists some argument, A, contingent on C such that A implies P or ~P. Thus for all P we can know if P or ~P. This would make A universally compelling, i.e. A is a mind-independent argument for the truth value of all statements that would convince even rocks.

appears to be a total non sequitur. How does the existence of an algorithm enable you to convince a rock of anything? At a minimum, an algorithm needs to be implemented on a computer... Your statement, and therefore your conclusion that C doesn't exist, doesn't follow at all.

(Note: In this comment, I am not claiming that C (as you've defined it) exists, or agreeing that it needs to exist for any of my criticisms to hold.)

Replies from: TAG, gworley
comment by TAG · 2018-11-01T12:49:44.175Z · LW(p) · GW(p)

It seems that you don’t get it. Said just demonstrated that even if C exists it wouldn’t imply a universally compelling argument.

So what? Neither the existence nor the non-existence of a Criterion of Truth that is persuasive to our minds is implied by the (non-)existence of universally compelling arguments. The issue of universally compelling arguments is a red herring.

comment by Gordon Seidoh Worley (gworley) · 2018-10-20T01:26:19.659Z · LW(p) · GW(p)

See my other comment [LW(p) · GW(p)], but anything we assume to know about how to compute C would already be part of C by definition. It's very hard to talk about the criterion of truth without accidentally saying something that implies it's not true, because it's an unknowable thing we can't grasp onto. C is basically a statement that, if included in a valid argument about the truth of P, causes the argument to tell us either P or ~P. That's definitionally what it means to be able to know the criterion of truth.

That you want to deny C is great, because I think (as I'm finding with Said), that we already agree, and any disagreement is the consequence of misunderstanding, probably because it comes too close to sounding to you like a position that I would also reject, and the rest of the fundamental disagreement is one of sentiment, perspective, having worked out the details, and emphasis.

Replies from: nshepperd
comment by nshepperd · 2018-10-20T02:43:01.580Z · LW(p) · GW(p)

C is basically a statement that, if included in a valid argument about the truth of P, causes the argument to tell us either P or ~P. That’s definitionally what it means to be able to know the criterion of truth.

That's not how algorithms work and seems... incoherent.

That you want to deny C is great,

I did not say that either.

because I think (as I’m finding with Said), that we already agree, and any disagreement is the consequence of misunderstanding, probably because it comes too close to sounding to you like a position that I would also reject, and the rest of the fundamental disagreement is one of sentiment, perspective, having worked out the details, and emphasis.

No, I don't think we do agree. It seems to me you're deeply confused about all of this stuff.

Here's an exercise: Say that we replace "C" by a specific concrete algorithm. For instance the elementary long multiplication algorithm used by primary school children to multiply numbers.

Does anything whatsoever about your argument change with this substitution? Have we proved that we can explain multiplication to a rock? Or perhaps we've proved that this algorithm doesn't exist, and neither do schools?

Another exercise: suppose, as a counterfactual, that Laplace's demon exists, and furthermore likes answering questions. Now we can take a specific algorithm C: "ask the demon your question, and await the answer, which will be received within the minute". By construction this algorithm always returns the correct answer. Now, your task is to give the algorithm, given only these premises, that I can follow to convince a rock that Euclid's theorem is true.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-20T02:53:40.246Z · LW(p) · GW(p)

Given that I still think after all this trying that you are confused and that I never wanted to put this much work into the comments on this post, I give up trying to explain further as we are making no progress. I unfortunately just don't have the energy to devote to this right now to see it through. Sorry.

comment by Said Achmiz (SaidAchmiz) · 2018-10-19T22:31:04.755Z · LW(p) · GW(p)

The counterfactual I’m proposing with C is exactly one that would allow not just any mind, but literally anything at all to comprehend A. The existence of C would create a universe wholly unlike our own, which is why I think we’re all in agreement that the existence of such a thing is extremely unlikely even though we can’t formally prove that it doesn’t exist.

Ok, this is… far weirder than anything I thought you had in mind when you talked about the “knowability of the criterion of truth”. As far as I can tell, this scenario is… incoherent. Certainly it’s extremely bizarre. I guess you agree with that part, at least.

But… what is it that you think the non-reality of this scenario implies? How do you get from “our universe is not, in fact, at all like this bizarre possibly-incoherent hypothetical scenario” to… anything about rationality, in our universe?

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-20T01:30:20.349Z · LW(p) · GW(p)

Well, if you don't have C, then you have to build up the truth some other way: you don't have the ability to ground yourself directly in it, because truth exists in the map rather than the territory. So you are left to ground yourself in what you do find in the territory, and I'd describe the thing you find there as telos or will rather than truth, because it doesn't really look like truth. Truth is a thing we have to create for ourselves rather than extract. The rest follows from that.

comment by Gordon Seidoh Worley (gworley) · 2018-10-19T20:45:34.586Z · LW(p) · GW(p)

Sorry, I meant to say "A is a mind-independent argument for the truth value of P and there exists by our construction such an A for all P that would convince even rocks".

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-10-19T20:53:47.099Z · LW(p) · GW(p)

How would you convince rocks?! What in the world does that have to do with there existing or not existing some observable procedure that shows whether something is true?

comment by clone of saturn · 2018-10-19T21:06:56.739Z · LW(p) · GW(p)

How would you tell if you had convinced a rock of something? Why is it important whether or not you can convince a rock of something?

Eliezer uses "convincing a rock" as a self-evidently absurd reductio, but it sounds like you don't actually see it that way?

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-19T22:08:00.280Z · LW(p) · GW(p)
Eliezer uses "convincing a rock" as a self-evidently absurd reductio, but it sounds like you don't actually see it that way?

Yep, I agree, which is why I point it out as something absurd that would be true if the counterfactual existence of C were instead factual.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-10-19T22:38:21.921Z · LW(p) · GW(p)

But you’ve done a sleight of hand!

First, you defined C, a.k.a. the “criterion of truth”, like this:

Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/​algorithm to assess if any given statement is true.

Ok, that’s only mildly impossible, let’s see where this leads us…

But then, you say:

The counterfactual I’m proposing with C is exactly one that would allow not just any mind, but literally anything at all to comprehend A. The existence of C would create a universe wholly unlike our own, which is why I think we’re all in agreement that the existence of such a thing is extremely unlikely even though we can’t formally prove that it doesn’t exist.

Why should the thing you defined in the first quote, lead to anything even remotely resembling the second quote? There is no reason, as far as I can tell; the latter quote just adds extremely impossible magic, out of nowhere and for no reason.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-20T01:16:28.895Z · LW(p) · GW(p)
There is no reason, as far as I can tell; the latter quote just adds extremely impossible magic, out of nowhere and for no reason.

I'm saying that the thing in the first quote - saying C exists - is the extremely impossible magic. I guess I don't know how to convey this part of the argument any more clearly, as it seems to me to follow directly, and the objections to it I can think of hinge on assuming things you would only know contingent on what you think about C, and thus are not admissible here.

Maybe it would help if I gave an example? Let's say C exists. Okay, great, now we can tell whether things are true independent of any mind, since C is a real fact of the world, not a belief (it's part of the territory). Now I can establish as a matter of fact (or rather, we have no way to express this correctly, but the fact can be established independent of any subject) whether or not the sky is blue, independent of any observer, because there is an argument contingent on C which tells us whether the statement "the sky is blue" is true or false. This statement is then true or false in the territory, and not necessarily in any map; we'd say this is a realist position rather than an anti-realist one. That would have to mean that this fact is true for anything we might treat as a subject of which we could ask "does X know the fact of the matter about whether or not the sky is blue". Thus we could ask whether a rock knows whether or not the sky is blue, and it would be a meaningful question about a matter of fact, and not a category error like it is when we deny the knowability of C (because then we have taken an anti-realist position). This is what I'm trying to say about there being universally compelling arguments if we assume C: the truth of matters then shifts from existing in the map to existing in the territory, and so now there can be universally compelling arguments for things that are true; even if the subject is too dumb to understand them, they will still be true for it regardless.

I'm not sure that helps but that's the best I can think up right now.

Replies from: ESRogs
comment by ESRogs · 2018-10-21T18:29:08.947Z · LW(p) · GW(p)

I'm also a bit confused about your definition of C.

Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/​algorithm to assess if any given statement is true.

Suppose there exists a special magic eight ball that shows the word "true" or "false" when you shake it after making any statement, and that it always gives the correct answer.

Would you agree that use of this special magic eight ball represents a "procedure/​algorithm to assess if any given statement is true", and so anyone who knows how to use the magic eight ball knows the criterion of truth?

If so, I don't see how you get from there to saying that a rock must be convinced, or really that anyone must therefore be convinced of anything.

Just because there exists a procedure for assessing truth (absolutely correctly), doesn't therefore mean that everyone uses that procedure, right?

Suppose that Alice has never seen nor heard of the magic eight ball, and does not know it exists. Just the fact that it exists doesn't imply anything about her state of mind, does it?

Was there supposed to be some part of the definition of C that my magic eight ball story doesn't capture, which implies that it represents a universally compelling argument?

Just being able to give the correct answer to any yes/no question does not seem like it's enough to be universally compelling.

EDIT: If the hypothetical was not A) "there exists... a procedure to (correctly) assess if any given statement is true", but rather B) "every mind has access to and in fact uses a procedure that correctly assesses if any given statement is true", then I would agree that the hypothetical implies universally compelling arguments.

Do you mean to be supposing B rather than A when you talk about the hypothetical criterion of truth?
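A toy way to see the gap between A and B (the eight ball and Alice are the hypotheticals from above; the code names and Alice's beliefs are just illustrative):

```python
def magic_eight_ball(statement: str) -> bool:
    """Hypothetical always-correct truth oracle -- assumption A: such a procedure exists."""
    ...  # stipulated; its mechanism doesn't matter for the point

class Alice:
    """A mind that has never heard of the eight ball, so assumption B fails for her."""
    def __init__(self):
        self.beliefs = {"the sky is green": True}  # confidently mistaken

    def assess(self, statement: str) -> bool:
        # Alice consults only her own (unreliable) beliefs, never the oracle.
        return self.beliefs.get(statement, False)

alice = Alice()
print(alice.assess("the sky is green"))  # True -- the oracle's mere existence changed nothing for her
```

The oracle existing (A) and every mind actually consulting it (B) come apart, which is the gap the question above is pointing at.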

comment by Said Achmiz (SaidAchmiz) · 2018-10-17T17:22:38.924Z · LW(p) · GW(p)
  • Do you think about distances in Metric or Imperial units? Both are equally true, so probably in whichever units you happen to be more fluent in.
  • Do you use Newtonian mechanics or full relativity for calculating the motion of some object? Relativity is more true, but sometimes the simpler model is good enough and easier to calculate, so it may be better for the situation.

These seem like silly examples to me.

I think about distances in Imperial units, but it seems very weird, inaccurate, and borderline absurd to describe me as believing the Imperial system to be “true”, or “more true”, or believing the metric system to be “not true” or “false” or “less true”. None of those make any sense as descriptions of what I believe. Frankly, I don’t understand how you can suggest otherwise.

Similarly, it is a true fact that Newtonian mechanics allows me to calculate the motion of objects, in certain circumstances (i.e., intermediate-scale situations / phenomena), to a great degree of accuracy, but that relativity will give a more accurate result, at the cost of much greater difficulty in calculation. This is a fact which I believe to be true. Describing Relativity as being “more true” is odd.
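For instance, here is a quick back-of-the-envelope comparison (the mass and speeds are picked arbitrarily) of Newtonian and relativistic kinetic energy, which is the sense in which the simpler model is "good enough" at familiar speeds:

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s

def ke_newtonian(mass_kg, v):
    return 0.5 * mass_kg * v ** 2

def ke_relativistic(mass_kg, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / C_LIGHT) ** 2)
    return (gamma - 1.0) * mass_kg * C_LIGHT ** 2

# Earth's orbital speed, ~10% of lightspeed, 90% of lightspeed
for v in (3.0e4, 3.0e7, 0.9 * C_LIGHT):
    ratio = ke_newtonian(1.0, v) / ke_relativistic(1.0, v)
    print(f"v = {v:12.1f} m/s   Newtonian / relativistic kinetic energy = {ratio:.6f}")
```

At the first speed the two calculations agree to many decimal places; that agreement is the true fact one believes, and calling one model "more true" adds nothing to it.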

  • Do you consider your romantic partner a wonderful person who you love dearly and want to be happy, or someone who does various things that benefit you, in exchange for you doing various things that benefit them? Both are true, but the former framing is probably one that will make for a happier and more meaningful relationship.

If both are true (as, indeed, they are, in many relationships), then this, too, seems like an odd example. Why choose? These are not in conflict. Why can’t someone be a wonderful person whom you love dearly and want to be happy, and who does various things that benefit you, in exchange for you doing various things that benefit them? I struggle to see any conflict or contradiction.

Meaning no disrespect, Kaj, but I spy a motte-and-bailey approach in these sorts of examples. The motte, of course, is “Newtonian mechanics” and so on. The bailey is “mythic mode”. To call the latter “indefensible” is an understatement.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-10-18T07:55:48.185Z · LW(p) · GW(p)

If both are true (as, indeed, they are, in many relationships), then this, too, seems like an odd example. Why choose? These are not in conflict. Why can’t someone be a wonderful person whom you love dearly and want to be happy, and who does various things that benefit you, in exchange for you doing various things that benefit them? I struggle to see any conflict or contradiction.

That's my point. That none of this stuff about choosing beliefs is in conflict with standard LW rationality, and that there are plenty of situations where you can just look at the world in one way or the other, and both are equally true, and you just focus on one based on whichever is the most useful for the situation. If you say that "these are not in conflict", then yes! That is what I have been trying to say! It's not true that this is a "poisonous philosophy", because this is mostly just a totally ordinary thing that everyone does every day and which is totally unproblematic!

Someone might then respond, "well if it's so ordinary, what's this whole thing about post/metarationality being totally different from ordinary rationality, then?" Honestly, beats me. I don't think it really is particularly different, and giving it a special label that implies that it's anything else than just a straightforward application of ordinary rationality is just confusing matters and doing everyone a disservice. But that's the label we seem to have ended up with.

Meaning no disrespect, Kaj, but I spy a motte-and-bailey approach in these sorts of examples. The motte, of course, is “Newtonian mechanics” and so on. The bailey is “mythic mode”. To call the latter “indefensible” is an understatement.

This is difficult to answer, because just as there are many things going under the label "rational" - some of which are decidedly less rational than others - there are also many ways in which you could think of mythic mode, even if you only limited yourself to different ways of interpreting Val's post on the topic. Without getting deeper into that topic, I'll just say that there are ways of interpreting mythic mode which I think are perfectly in line with the kinds of examples I've been giving in the comments of this post, and also ways of interpreting it which are not and which are just crazy.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-10-18T08:02:04.881Z · LW(p) · GW(p)

none of this stuff about choosing beliefs is in conflict with standard LW rationality

What do you mean, “choosing beliefs”? The bit of my comment that you quoted said nothing about choosing beliefs. The situation I describe doesn’t seem to require “choosing beliefs”. You just believe what is, to the best of your ability to discern, true. That’s all. What “choosing” is there?

Someone might then respond, “well if it’s so ordinary, what’s this whole thing about post/​metarationality being totally different from ordinary rationality, then?” Honestly, beats me. I don’t think it really is particularly different, and giving it a special label that implies that it’s anything else than just a straightforward application of ordinary rationality is just confusing matters and doing everyone a disservice. But that’s the label we seem to have ended up with.

Maybe what you’re talking about is different from what everyone else who is into “postrationality”, or what have you, is talking about?

Without getting deeper into that topic, I’ll just say that there are ways of interpreting mythic mode which I think are perfectly in line with the kinds of examples I’ve been giving in the comments of this post

But… I think that your examples are examples of the wrong way to think about things… “crazy” is probably an overstatement for your comments (as opposed to those of some other people), but “wrong” does not seem to be…

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-18T17:46:55.430Z · LW(p) · GW(p)
"Someone might then respond, “well if it’s so ordinary, what’s this whole thing about post/​metarationality being totally different from ordinary rationality, then?” Honestly, beats me. I don’t think it really is particularly different, and giving it a special label that implies that it’s anything else than just a straightforward application of ordinary rationality is just confusing matters and doing everyone a disservice. But that’s the label we seem to have ended up with."
Maybe what you’re talking about is different from what everyone else who is into “postrationality”, or what have you, is talking about?

(sorry; I can't seem to nest blockquotes in the comments; that's the best I could do)

For myself I find this point is poorly understood by most self-identified rationalists, and I think most people reading the sequences come out of them as positivists because Eliezer didn't hammer the point home hard enough and positivism is the default within the wider community of rationality-aligned folks (e.g. STEM folks). I wish all this disagreement were just a simple matter of politics over who gets to use what names, but it's not, because there's a real disagreement over epistemology. Given that "rationality" was always a term that was bound to get conflated with the rationality of high modernism, it's perhaps not surprising that those of us who got fed up with the positivists ended up giving ourselves a new name.

This is made all the more complicated because Eliezer does specifically call out positivism as a failure mode, so it makes pinning people down on this all the more tricky because they can just say "look, Eliezer said rationality is not this". As the responses to this post make clear, though, the positivist streak is alive and well in the LW community given what I read as a strong reaction against the calling out of positivism or for that matter privileging any particular leap of faith (although positivists don't necessarily think of themselves as doing that because they disagree with the premise that we can't know the criterion of truth). So this all leads me to the position that we need this distinction for now, because of our disagreement on this fundamental issue, which has many effects on what is and is not considered useful to our shared pursuits.

Replies from: Kaj_Sotala, SaidAchmiz
comment by Kaj_Sotala · 2018-10-21T10:37:26.289Z · LW(p) · GW(p)

For myself I find this point is poorly understood by most self-identified rationalists, and I think most people reading the sequences come out of them as positivists because Eliezer didn't hammer the point home hard enough and positivism is the default within the wider community of rationality-aligned folks (e.g. STEM folks).

Maybe so, but I can't help noticing that whenever I try to think of concrete examples of what postrationality implies in practice, I always end up with examples that you could just as well justify using the standard rationalist epistemology. E.g. all my examples in this comment section. So while I certainly agree that the postrationalist epistemology is different from the standard rationalist one, I'm having difficulties thinking of any specific actions or predictions that you would really need the postrationalist epistemology to justify. Something like the criterion of truth is a subtle point which a lot of people don't seem to get, yes, but it also feels like one where it doesn't make any practical difference whether you get it or not. And theoretical points which people can disagree a lot about despite not making any practical difference are almost the prototypical example of tribal labels. John Tooby:

The more biased away from neutral truth, the better the communication functions to affirm coalitional identity, generating polarization in excess of actual policy disagreements. Communications of practical and functional truths are generally useless as differential signals, because any honest person might say them regardless of coalitional loyalty. In contrast, unusual, exaggerated beliefs—such as supernatural beliefs (e.g., god is three persons but also one person), alarmism, conspiracies, or hyperbolic comparisons—are unlikely to be said except as expressive of identity, because there is no external reality to motivate nonmembers to speak absurdities.

comment by Said Achmiz (SaidAchmiz) · 2018-10-18T17:54:47.496Z · LW(p) · GW(p)

(sorry; I can’t seem to nest blockquotes in the comments; that’s the best I could do)

Not related to your points, but re: blockquotes and nesting, try the GreaterWrong editor; you can select some text and click the blockquote button, then select text (including the blockquoted) and click blockquote again, etc., and it’ll nest it properly for you.

comment by Gordon Seidoh Worley (gworley) · 2018-10-16T01:57:43.734Z · LW(p) · GW(p)
The only sensible response to the problem of induction is to do our best to track the truth anyway.

I just want to make clear that's exactly what Kaj and I are saying to do. Our caveat is that it's not the only thing you can do because it's not the only thing you do do even if you wanted desperately with all your heart for it to be otherwise.

Everybody who comes up with some clever reason to avoid doing this thinks they've found some magical shortcut, some powerful yet-undiscovered tool (dangerous in the wrong hands, of course, but a rational person can surely use it safely...). Then they cut themselves on it.

This also seems to be missing the point; we're specifically saying that things rationalists think are not magical (like assuming you can know the criterion of truth) are in fact magical, and because of this you can't make assumptions strong enough to directly go after the truth without contradiction.

comment by TAG · 2018-11-01T12:33:44.330Z · LW(p) · GW(p)

I don’t know what “directly” means, but there certainly is a causal pathway, and we can certainly evaluate whether our brains are tracking reality. Just make a prediction, then go outside and look with your eyes to see if it comes true.

Inasmuch as you are looking with your eyes, that would be tracking appearance. What you don't have is a way of checking whether the ultimate causes of your sense data, in reality, are what you think they are.

comment by Richard_Kennaway · 2018-10-17T22:10:40.664Z · LW(p) · GW(p)
Because there's no causal pathway through which we could directly evaluate whether or not our brains are actually tracking reality.

However, there are causal pathways through which we can evaluate whether or not our brains are tracking reality. They have been extensively written about on LessWrong over the years, and a large amount of the core material is collected in a book.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-18T01:26:20.511Z · LW(p) · GW(p)
However, there are causal pathways through which we can evaluate whether or not our brains are tracking reality. They have been extensively written about on LessWrong over the years, and a large amount of the core material is collected in a book.

For some, indeed most, parts of our brains this seems true, but the point is that it doesn't hold for all of them, and not reliably enough, so we're left with some doubt about what's really going on.

comment by Gordon Seidoh Worley (gworley) · 2018-10-15T18:49:10.306Z · LW(p) · GW(p)

FWIW, an interesting counter-question is to ask you, nshepperd, to provide the criterion of truth, or at least how one might find it. I'll warn you in advance, though, that Eliezer never adequately addresses this question in his writing, because he is a pragmatist and so cuts off the line of inquiry before he'd have to address it; thus appealing to him is insufficient (not that you necessarily would, but you've been linking him heavily and I want to cut short the need for a round of conversation where we get over this hurdle).

I'll also say I have no problem with pragmatism per se, and in fact I say as much in another comment, because pragmatism is how you get on with living despite great doubt; but if you choose to go deeper on questions of epistemology than a pragmatic approach may at the moment demand, you're forced to grapple with the problem of the criterion head-on.

Don't feel pressured to do this though; I just think you'll find it an interesting exercise to try to pin it down and might gain some insight from it into the postrationalist worldview.

Replies from: SaidAchmiz, nshepperd
comment by Said Achmiz (SaidAchmiz) · 2018-10-15T19:22:50.437Z · LW(p) · GW(p)

Supposing for the sake of argument that we agree with your view (that Eliezer’s perspective is insufficient to constitute a criterion of truth—a view I disagree with, but that is not the point of this comment), the question arises:

Why would we want to “go deeper on questions of epistemology than a pragmatic approach may at the moment demand”? By definition, it would seem, there is no practical (i.e. instrumental, i.e. pragmatic) reason to do so. Why, then?

(This is, of course, simply another way of saying “what do I gain from going ‘postrationalist’?”, which is a question I’ve never seen satisfactorily answered.)

P.S.: This isn’t just “the problem of induction”, a.k.a. “but you can never be 100% certain!”, is it? Surely that has been adequately dealt with, on Less Wrong…

Replies from: gworley, TAG
comment by Gordon Seidoh Worley (gworley) · 2018-10-15T19:38:06.073Z · LW(p) · GW(p)
Why would we want to “go deeper on questions of epistemology than a pragmatic approach may at the moment demand”? By definition, it would seem, there is no practical (i.e. instrumental, i.e. pragmatic) reason to do so. Why, then?

I'm careful above to say "than a pragmatic approach may at the moment demand". Pragmatism has no universal ground to stand on: it's always pragmatic to the task at hand. I have a need/interest to go deeper, but others may not, so they do not, and that's fine; it only means that they bottom out where I want/need to go deeper, same as I take a pragmatic approach to understanding biochemistry or fluid dynamics and bottom out my inquiry much sooner than would a pharmacologist or an aeronautical engineer, respectively.

Conversely, if you start playing the "why go deeper, what's the practical reason" game, you'll quickly find there's little reason for this site or any of the activity on it to exist since, after all, you can live just fine without it (and much else besides), but since people are interested or find themselves with a need to know more to serve some end they have, we're here anyway.

P.S.: This isn’t just “the problem of induction”, a.k.a. “but you can never be 100% certain!”, is it? Surely that has been adequately dealt with, on Less Wrong…

Yep, that's another guise this problem takes, and no, we haven't fully solved it (and can't!), although some great work has been done on figuring out how to better address this problem formally.

comment by TAG · 2018-10-16T07:39:41.609Z · LW(p) · GW(p)

Why would we want to “go deeper on questions of epistemology than a pragmatic approach may at the moment demand”?

That depends on who we are as individuals. Some individuals terminally value epistemic truth. Some individuals have made strident claims to know the truth.

comment by nshepperd · 2018-10-16T00:49:56.100Z · LW(p) · GW(p)
comment by Gordon Seidoh Worley (gworley) · 2018-10-15T18:37:28.289Z · LW(p) · GW(p)
Why would you think "magic access" is required? It seems to me the ordinary non-magic causal access granted by our senses works just fine.

I'll let Kaj say more, but in short it becomes a logical necessity if you want to ground the line of reasoning without introducing self-contradiction (or enough freedom that you can say P=~P) and without straying into saying something isomorphic to the postrationalist position.

comment by Gordon Seidoh Worley (gworley) · 2018-10-15T02:49:05.443Z · LW(p) · GW(p)
We can't assess whether things are true with 100% reliability, of course. The dark lords of the matrix could always manipulate your mind directly and make you see something false. They could be doing this right now. But so what? Are you going to tell me that we can assess 'telos' with 100% reliability? That we can somehow assess whether it is true that believing something will help fulfill some purpose, with 100% reliability, without knowing what is true?

Of course not, and that's the point.

The problem with assessing beliefs or judgements with anything other than their truth is exactly that the further your beliefs are from the truth, the less accurate any such assessments will be. Worse, this is a vicious positive feedback loop if you use these erroneous 'telos' assessments to adopt further beliefs, which will most likely also be false, and make your subsequent assessments even more inaccurate.

Indeed, which is why metarationality must not forget to also include all of rationality within it!

People used to come to this site all the time complaining about the warning about politics: Politics is the Mind-Killer. They would say "for ordinary people, sure it might be dangerous, but we rationalists should be able to discuss these things safely if we're so rational", heedless of the fact that the warning was meant not for ordinary people, but for rationalists. The message was not "if you are weak, you should avoid this dangerous thing; you may demonstrate strength by engaging the dangerous thing and surviving" but "you are weak; avoid this dangerous thing in order to become strong".

To say a little more on danger, I mean dangerous to the purpose of fulfilling your own desires. Unlike politics, which is an object-level danger you are pointing to, postrationality is a metalevel danger, but specifically because it's a more powerful set of tools rather than a shiny thing people like to fight over. This is like the difference between being wary of generally unsafe conditions, which cannot be put to use, and being wary of dangerous tools, which are only dangerous if used by the unskilled.

Replies from: Richard_Kennaway, nshepperd, Pattern
comment by Richard_Kennaway · 2018-10-15T11:33:12.207Z · LW(p) · GW(p)
Indeed, which is why metarationality must not forget to also include all of rationality within it!

In practice this converges on "Embrace, extend, and extinguish".

comment by nshepperd · 2018-10-16T00:33:59.698Z · LW(p) · GW(p)

Of course not, and that’s the point.

The point... is that judging beliefs according to whether they achieve some goal (or anything else) is no more reliable than judging beliefs according to whether they are true, is in no way a solution to the problem of induction or even a sensible response to it, and most likely only makes your epistemology worse?

Indeed, which is why metarationality must not forget to also include all of rationality within it!

Can you explain this in a way that doesn't make it sound like an empty applause light? How can I take compellingness-of-story into account in my probability estimates without violating the Kolmogorov axioms?
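
(To spell out the constraint being invoked here: under the standard Kolmogorov axioms, any probability assignment P must satisfy P(A) >= 0 for every event A, P(the entire sample space) = 1, and P(A or B) = P(A) + P(B) whenever A and B are mutually exclusive. Any extra "criterion" folded into the numbers still has to respect these, or what you have is no longer a probability distribution at all.)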

To say a little more on danger, I mean dangerous to the purpose of fulfilling your own desires.

Yes, that's exactly the danger.

Unlike politics, which is an object-level danger you are pointing to, postrationality is a metalevel danger, but specifically because it’s a more powerful set of tools rather than a shiny thing people like to fight over. This is like the difference between being wary of generally unsafe conditions, which cannot be put to use, and being wary of dangerous tools, which are only dangerous if used by the unskilled.

Thinking you're skilled enough to use some "powerful but dangerous" tool is exactly the problem. You will never be skilled enough to deliberately adopt false beliefs without suffering the consequences.

Ethical Injunctions:

But surely… if one is aware of these reasons… then one can simply redo the calculation, taking them into account. So we can rob banks if it seems like the right thing to do after taking into account the problem of corrupted hardware and black swan blowups. That’s the rational course, right?

There’s a number of replies I could give to that.

I’ll start by saying that this is a prime example of the sort of thinking I have in mind, when I warn aspiring rationalists to beware of cleverness.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-16T02:15:58.848Z · LW(p) · GW(p)

I think we've already hit the crux of our disagreement and further drilling is pointless. If I may summarize what I think the key disagreement is, you think we can know truth well enough to avoid the problem of the criterion and gain nothing from addressing it. I think we do not and cannot know the criterion for assessing truth well enough to ignore the problem. I might make this even shorter by saying you take the pragmatist position and I take the skeptical position regarding epistemic circularity (although for myself this elides my agreement that we have to be pragmatic even as we are skeptical if we are to get anything done, and for you likely elides some skepticism you'd like to maintain).

I move us in this direction because I think, for example, trying to respond to

Thinking you're skilled enough to use some "powerful but dangerous" tool is exactly the problem. You will never be skilled enough to deliberately adopt false beliefs without suffering the consequences.

is fruitless right now because your disagreement with what I've said seems to hinge on the sort of relationship we believe we are capable of having with the truth.

Replies from: nshepperd
comment by nshepperd · 2018-10-16T02:36:20.453Z · LW(p) · GW(p)

If I may summarize what I think the key disagreement is, you think we can know truth well enough to avoid the problem of the criterion and gain nothing from addressing it.

That's not my only disagreement. I also think that your specific proposed solution does nothing to "address" the problem (in particular because it just seems like a bad idea, in general because "addressing" it to your satisfaction is impossible), and only serves as an excuse to rationalize holding comforting but wrong beliefs under the guise of doing "advanced philosophy". This is why the “powerful but dangerous tool” rhetoric is wrongheaded. It's not a powerful tool. It doesn't grant any ability to step outside your own head that you didn't have before. It's just a trap.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-16T04:11:49.382Z · LW(p) · GW(p)

It's true that I think the problem of the criterion cannot be resolved, and this forces us to adopt particularism (this is different from pragmatism but compatible with it, see Chisholm's work in this area for more information). I'm not sure what "comforting but wrong beliefs" you think I'm holding on to, though, and to be pointed about it I think believing you can identify the criterion of truth is a "comforting" belief that is either contradictory or demands adopting non-transcendental idealism (a position I think is insufficiently parsimonious to be worth taking).

As for it being "a trap" and granting you no more "ability to step outside your own head that you didn't have before", I'd say this is entirely true of any ontology you construct. That doesn't mean we don't try, but it is the case that we are always stuck in our heads so long as we are trying to understand anything because that's the nature of what it is to understand. You'll likely disagree with me on this point because we disagree on the problem of the criterion, but I'd say the only way to get outside your own head is by turning to the pre-ontological or the ontic through techniques like meditation and epoche.

So alas it sounds as though we are at an impasse as I don't really have the interest or the energy to try to convince you to my side of the question of how to address epistemic circularity given my current understanding of your reasoning. That's not to dismiss you, only that it's beyond what I'm currently up to engaging in. Perhaps another will step into this thread and take up the challenge.

Replies from: nshepperd
comment by nshepperd · 2018-10-16T04:39:01.399Z · LW(p) · GW(p)

If I may summarize what I think the key disagreement is, you think we can know truth well enough to avoid the problem of the criterion and gain nothing from addressing it.

and to be pointed about it I think believing you can identify the criterion of truth is a “comforting” belief that is either contradictory or demands adopting non-transcendental idealism

Actually... I was going to edit my comment to add that I'm not sure that I would agree that I "think we can know truth well enough to avoid the problem of the criterion" either, since your conception of this notion seems to intrinsically require some kind of magic, leading me to believe that you somehow mean something different by this than I would. But I didn't get around to it in time! No matter.

comment by Pattern · 2018-10-15T04:14:14.980Z · LW(p) · GW(p)
a more powerful set of tools

What are these tools, or where can they be found? (And how are they dangerous?)

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-15T18:10:57.691Z · LW(p) · GW(p)

I'm going to outsource the answer to this to David Chapman; it's a bit more than I could hope to fit in a response here.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2018-10-15T20:04:01.808Z · LW(p) · GW(p)

It seems to be more than he can fit into the whole of his family of blogs. I've read quite a lot of what he's written, but at last I gave up on his perpetual jam-tomorrow deferral of any plain setting out of his positive ideas.

Replies from: rsaarelm
comment by rsaarelm · 2018-10-15T23:54:26.971Z · LW(p) · GW(p)

I like to link to his recommended reading list instead of the main site for gesturing towards what Chapman seems to be circling around while never quite landing on. It's still not a clear explanation of the thing, but at least that's more than one person's viewpoint on the landscape.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2018-10-16T09:11:34.741Z · LW(p) · GW(p)

Contrast The Sequences, where Eliezer simply wrote down what he knew, with exemplary clarity. No circling around and around, but steadily marching through the partial order of dependencies of topics, making progress with every posting, that built into a whole. That is what exposition should look like. No repeated restarting with ever more fundamental things that would have to be explained first while the goal gets further away. No gesturing towards enormous reading lists as a stopgap for being unable to articulate his ideas about that material. Perhaps that is because he had something real to say?

In my experience, the subjective feeling that one understands an idea, even if it seems to have pin-sharp clarity, often does not survive trying to communicate it to another person, or even to myself by writing it down. The problem may not be that the thing is difficult to convey, but that I am confused about the thing. The thing may not exist, not correspond to anything in reality. (Explaining something to a machine, i.e. programming, is even more exposing of confusion.)

When a thing is apparently so difficult to communicate that however much ink one spills on the task, the end of it does not draw nearer, but recedes ever further, I have to question whether there is anything to be communicated.

In that reading list page, Chapman writes this:

I took the title of my book In the Cells of the Eggplant from a dialog in Understanding Computers and Cognition:
A. Is there any water in the refrigerator?
B. Yes.
A. Where? I don't see it.
B. In the cells of the eggplant.
Was “there is water in the refrigerator” true?
That question can only be answered meta-rationally: “True in what sense? Relative to what purpose?”

That is not "meta-rationality", that is plain rationality. Person A is asking about water to drink. The anecdote implies that there is none in the refrigerator, therefore the correct answer is "No." The simple conundrum is dissolved [LW · GW] by the ordinary concept of implicature. Even a child can tell that B's reply is either idiotic or a feeble joke. That Chapman has titled his intended book after this vignette does not give me confidence in his project.

Replies from: Richard_Kennaway, rsaarelm
comment by Richard_Kennaway · 2018-10-16T16:07:51.847Z · LW(p) · GW(p)

I have just recalled an anecdote about the symptoms of trying to explain something incoherent. If (so I read) you hypnotize someone and suggest to them that they can see a square circle drawn on the wall, fully circular and fully a square, they have the experience of seeing a square circle. Now, I'm somewhat sceptical about the reality of hypnosis, but not at all sceptical about the physical ability of a brain to have that experience, despite the fact that there is no such thing as a square circle.

If you ask that person (the story goes on) to draw what they see, they start drawing, but keep on erasing and trying again, frustrated by the fact that what they draw always fails to capture the thing they are trying to draw.

Edit: the story is from Edward de Bono's book "Lateral Thinking: An Introduction" (previously published as "The Use of Lateral Thinking").

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-16T18:04:49.689Z · LW(p) · GW(p)

I think you might be reading a bit too much into things here. Eliezer is exceptional in his verbal communication abilities, whatever his other skills and flaws, so even if the most rationalist of rationalists set out to write the Sequences without being on par with Eliezer's verbal skills, they likely would not have been as successful; they would have gotten lost and left lots of dangling pointers to things to explain later. Chapman is facing the normal problems of trying to explain a complex thing when you're closer to the mean.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-10-16T19:41:32.863Z · LW(p) · GW(p)

The problem with this reasoning is: if you can’t explain it, just how exactly are you so sure that there is any “it” to explain?

If we’re talking about some alleged practical skill, or some alleged understanding of a physical phenomenon, etc., then this is no obstacle. Can’t explain it in words? Fine; simply demonstrate the skill, or make a correct prediction (or several) based on your understanding, and it’ll be clear at once that you really do have said skill, or really do possess said understanding.

For example, say I claim to know how to make a delicious pie [LW · GW]. You are skeptical, and demand an explanation. I fumble and mumble, and eventually confess that I’m just no good at explaining. But I don’t have to explain! I can just bake you a pie. So I can be perfectly confident that I have pie-baking skills, because I have this pie; and you can be confident in same, for the same reason.

Similar logic applies to alleged understanding of the real world.

But with something like this—some deep philosophical issue—how can you demonstrate to me that you know something I don’t, or understand something I don’t, without explaining it to me? Now, don’t get me wrong; maybe you really do have some knowledge or understanding. Not all that is true, can be easily demonstrated.

But without a clear verbal explanation, not only do I have no good reason at all to believe that “there’s a ‘there’ there”… but neither do you!

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-16T21:00:29.011Z · LW(p) · GW(p)
But without a clear verbal explanation, not only do I have no good reason at all to believe that “there’s a ‘there’ there”… but neither do you!

I may well have knowledge of things through experience that I cannot verbalize well or explain to myself in a systematized way. To suggest that I can only have such knowledge if I can explain it is to assume against the point I'm making in the original post, which you are free to disagree with, but I want to make clear that I think this is a difference of assumptions, not a difference of reasoning from a shared assumption.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-10-16T21:24:49.291Z · LW(p) · GW(p)

So… you have experience you can’t verbalize or explain, to yourself or to others; and through this experience, you gain knowledge, which you also can’t adequately verbalize or explain, to yourself or to others? Nor can you in any meaningful way demonstrate this knowledge or its fruits?

Once again, I do not claim that this proves that you don’t have any special knowledge or understanding that you claim to have. But it seems clear to me that you have no good reason for believing that you have any such thing—and much less does anyone else have any reason to believe this.

And, with respect, whatever assumption you have made, which would lead you to conclude otherwise… I submit to you that this assumption has taken you beyond the bounds of sanity.

Replies from: romeostevensit
comment by romeostevensit · 2018-10-17T02:33:21.159Z · LW(p) · GW(p)

Improvements to subjective well being can be extremely legible from the inside and fairly noisy from the outside.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-10-17T02:57:15.978Z · LW(p) · GW(p)

Fair enough, that’s true. To be clear, then—is that all this is about? “Improvements to subjective well being”?

Replies from: Pattern
comment by Pattern · 2018-11-24T16:38:35.230Z · LW(p) · GW(p)

I would imagine it's also about: not having such an explanation right now, but being confident you will have one soon. For an extreme case with high confidence: I see a 'proof' that 1=2. I may be confident that there is at least one mistake in the proof before I find it. With less confidence, I may guess that the error is 'dividing by zero' before I see it.
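
(For concreteness, one classic version of such a 'proof' runs: assume a = b; then a^2 = ab; a^2 - b^2 = ab - b^2; (a - b)(a + b) = b(a - b); cancel (a - b) to get a + b = b; so 2b = b and 2 = 1. The cancellation step divides by zero, since a - b = 0, and one can be confident some such mistake exists even before locating it.)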

comment by rsaarelm · 2018-10-18T07:11:11.769Z · LW(p) · GW(p)

That's a lot of reiteration of the problem with Chapman's writing which was a reason I pointed to the reading list to begin with. Not trying to pull a "you must read all this before judging Chapman" Gish gallop, but trying to figure out if there's some common strain of what Nietzsche, Heidegger, Wittgenstein, Dreyfus, Hofstadter and Kegan are going on about that looks like what Chapman is trying to go for. Maybe the idea is just really hard, harder than the Sequences stuff, but at least you got several people doing different approaches to it so you have a lot more to work with.

And it might be that there isn't, and this is all just Chapman flailing about. When someone builds a working AGI with just good, basic common-sense rationality ideas, I'll concede that he probably was. In the meantime, it seems like missing the point to criticize an example, whose point is that it's obvious to humans, for being obvious to humans. I took the whole point of the example to be that we're still mostly at the level of "dormitive principle" explanations for how humans figure this stuff out, and now we have the AI programming problem that gives us some roadmap for what an actual understanding of this stuff would look like, and suddenly figuring out the eggplant-water thing from first principles isn't that easy anymore. (Of course now we also have the Google trick of having a statistical corpus of a million cases of humans asking for water from the fridge where we can observe them not being handed eggplants, but taking that as the final answer doesn't seem quite satisfactory either.)

The other thing is the Kegan levels and the transition from a rule-following human, who's already doing pretty AI-complete tasks but very much thinking inside the box, to the system-shifting human. A normal human is just going to say "there are alarm bells ringing, smoke coming down the hallway and lots of people running towards the emergency exits, maybe we should switch from the weekly business review meeting frame to the evacuating the building frame about now", while the business review meeting robot will continue presenting sales charts until it burns to a crisp. The AI engineer is going to ask, "how do you figure out which inputs should cause a frame shift like that and how do you figure out which frame to shift to?" The AI scientist is going to ask, "what's the overall formal meta-framework of designing an intelligent system that can learn to dynamically recognize when its current behavioral frame has been invalidated and to determine the most useful new behavioral frame in this situation?" We don't seem to really have AI architectures like this yet, so maybe we need something more heavy-duty than SEP pages to figure them out.

So that's a part of what I understand Chapman is trying to do. Hofstadter-like stuff, except actually trying to tackle it somehow instead of just going "hey I guess this stuff is a thing and it actually looks kinda hard" like Hofstadter did in GEB. And then the background reading has the fun extra feature that before about the 1970s nobody was framing this stuff in terms of how you're supposed to build an AI, so they'll be coming at it from quite different viewpoints.

Replies from: Elo
comment by Elo · 2018-10-18T07:47:02.495Z · LW(p) · GW(p)

It's not "hard", just remarkably subtle. Frustratingly vague, often best described by demonstrating wordless experiences. I often describe the idea as a cluster, not something that can be nailed down. And that's not for lack of trying; it's because it's a purposely slippery concept.

Try to hold a slippery concept and talk about it and it just doesn't convert to words very well.

Replies from: rsaarelm
comment by rsaarelm · 2018-10-18T08:11:10.985Z · LW(p) · GW(p)

That's the way where you try to make another adult human recognize the thing based on their own experiences, which is how we've gone about this since the Axial Age. Since the 1970s, a second approach has been on the table: how would you program an artificial intelligence to do this? If we could manage it, it would in theory be a much more robust statement of the case, but it would also probably be much, much harder for humans to actually follow by going through the source code. I'm guessing this is what Chapman is thinking when he specifies "can be printed in a book of less than 10kg and followed consciously" for a system intended for human consumption.

Of course there's also a landscape between the simple, everyday-language descriptions that can engender confusion and the full formal specification of a human-equivalent AGI. We do know that either humans work by magic, or a formal specification of a human-equivalent AGI exists, even if we can't yet write down the book (probably well over 10 kg) containing it. So either Chapman's stuff hits somewhere in the landscape between present-day reasoning writing that piggybacks on existing human cognitive capabilities and the Illustrated Complete AGI Specification, or it does not; but it seems like the landscape should be there anyway, and getting some maps of it could be very useful.

comment by TAG · 2018-10-15T21:13:56.824Z · LW(p) · GW(p)

Substitute other criteria for what? You write as though the problem of obtaining correspondence to ultimate reality were already solved, and only the will to do so is missing.

Replies from: nshepperd
comment by nshepperd · 2018-10-16T00:44:05.858Z · LW(p) · GW(p)

I don't have to solve the problem of induction to look out my window and see whether it is raining. I don't need 100% certainty, a four-nines probability estimate is just fine for me.

Where's the "just go to the window and look" in judging beliefs according to "compellingness-of-story"?

Replies from: TAG, gworley
comment by TAG · 2018-10-16T06:33:56.644Z · LW(p) · GW(p)

I wasn't talking about induction specifically.

Merely observing doesn't solve everything. What about the rainbow you see after the rain has stopped? How many times have people observed the sun without knowing it is a fusion reactor?

Replies from: nshepperd
comment by nshepperd · 2018-10-16T15:15:55.398Z · LW(p) · GW(p)

Indeed, the scientific history of how observation and experiment led to a correct understanding of the phenomenon of rainbows is long and fascinating.

Replies from: TAG
comment by TAG · 2018-10-17T11:00:16.047Z · LW(p) · GW(p)

Which is to say that it is a lot more complex than "just look" and also more complex than "come up with a predictive theory". Indeed, no one has a method for obtaining correspondence to reality that works in all cases.

comment by Gordon Seidoh Worley (gworley) · 2018-10-16T02:06:31.699Z · LW(p) · GW(p)

This seems to be completely missing the mark and failing to respond in good faith. I already deleted a couple other comments for this reason, including one of yours nshepperd, but this case is marginal enough that I'll let it slide. Consider yourself warned and I will ban if necessary to maintain productive discussion, which would be unfortunate given your fruitful contributions elsewhere in the comments of this post.

Replies from: nshepperd
comment by nshepperd · 2018-10-16T14:28:04.061Z · LW(p) · GW(p)

I'm sorry, what? In this discussion? That seems like an egregious conflict of interest. You don't get to unilaterally decide that my comments are made in bad faith based on your own interpretation of them. I saw which comment of mine you deleted and honestly I'm baffled by that decision.

Replies from: habryka4, gworley
comment by habryka (habryka4) · 2018-10-16T19:44:02.516Z · LW(p) · GW(p)

The moderation system we settled on [LW · GW] gives people above a certain karma threshold the ability to moderate on their own posts, which I think is very important to allow people to build their own gardens and cultivate ideas. Discussion about that general policy should happen in meta. I will delete any further discussion of moderation policies on this post.

comment by Gordon Seidoh Worley (gworley) · 2018-10-16T18:12:15.952Z · LW(p) · GW(p)

Please see the moderation guidelines. I choose to enforce a particular norm I spell out and I'm the ultimate arbiter of that. If anything I am too generous to people and let them get away with a lot of bullshit before I put a stop to things. This is not to say I never make errors, but if I think you made insufficient effort to respond in a good faith way to advance the conversation, understand the other person, and respond in a way that is not simply reacting in frustration, trying to score points, or otherwise speak to some purpose other than increasing mutual understanding, then your comment will be deleted. If you don't like my garden you can always go talk somewhere else.

comment by Kaj_Sotala · 2018-10-15T12:07:14.305Z · LW(p) · GW(p)

A partial response to your criticisms would be that, even if you do have truth as the criterion for your beliefs, then this still leaves the truth value of a wide range of beliefs underdetermined [LW(p) · GW(p)]; so one postrational approach would be to use truth as the primary criterion for forming beliefs, and then use other criteria for filling in the beliefs which the criterion of truth doesn't help us distinguish between.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-10-15T16:06:39.435Z · LW(p) · GW(p)

I find this view unconvincing, and here’s why.

We can, it seems to me, divide what you say in the linked comment into two parts or aspects.

On the one hand, we have “the predictive processing thing”, as you put it. Well, it’s a lot of interesting speculation, and a potentially interesting perspective on some things. So far, at least, that’s all it is. Using it as any kind of basis for constructing a general epistemology is just about the dictionary definition of “premature”.

On the other hand, we have familiar scenarios like “I will go to the beach this evening”. These are quite commonplace and not at all speculative, so we certainly have to grapple with them.

At first blush, such a scenario seems like a challenge to the “truth as a basis for beliefs” view. Will I go to the beach this evening? Well, as you say—if I believe that I will, then I will, and if I don’t, then I won’t… how can I form an accurate belief, if its truth value is determined by whether I hold it?!

… is what someone might think, on a casual reading of your comment. But that’s not quite what you said, is it? Here’s the relevant bit:

Another way of putting it: what is the truth value of the belief “I will go to the beach this evening”? Well, if I go to the beach this evening, then it is true; if I don’t go to the beach this evening, it’s false. Its truth is determined by the actions of the agent, rather than the environment.

[emphasis mine]

This seems significant, and yet:

“What is the truth value of the belief ‘snow is white’? Well, if snow is white, then it is true; if snow is not white, it’s false.”

What is the difference between this, and what you said? Is merely the fact that “I will go to the beach this evening” is about the future, whereas “snow is white” is about the present? Are we saying that the problem is simply that the truth value of “I will go to the beach this evening” is as yet undetermined? Well, perhaps true enough, but then consider this:

“What is the truth value of the belief ‘it will rain this evening’? Well, if it rains this evening, then it is true; if it doesn’t rain this evening, it’s false.”

So this is about the future, and—like the belief about going to the beach—is, in some sense, “underdetermined by external reality” (at least, to the extent that the universe is subjectively non-deterministic). Of course, whether it rains this evening isn’t determined by your actions, but what difference does that make? Is the problem one of underdetermination, or agent-dependency? These are not the same problem!

Let’s return to my first example—“snow is white”—for a moment. Suppose that I hail from a tropical country, and have never seen snow (and have had no access to television, the internet, etc.). Is snow white? I have no idea. Now imagine that I am on a plane, which is taking me from my tropical homeland to, say, Murmansk, Russia. Once again, suppose I say:

“What is the truth value of the belief ‘snow is white’? Well, if snow is white, then it is true; if snow is not white, it’s false.”

For me (in this hypothetical scenario), there is no difference between this statement, and the one about it raining this evening. In both cases, there is some claim about reality. In both cases, I lack sufficient information to either accept the claim as true or reject it as false. In both cases, I expect that in just a few hours, I will acquire the relevant information (in the former case, my plane will touch down, and I will see snow for the first time, and observe it to be white, or not white; in the latter case, evening will come, and I will observe it raining, or not raining). And—in both cases—the truth of each respective belief will then come to be determined by external reality.

So the mere fact of some beliefs being “about the future” hardly justifies abandoning truth as a singular criterion for belief. As I’ve shown, there is little material difference between a belief that’s “about the future” and one that’s “about a part of the present concerning which we have insufficient information”. (And, by the way, we have perfectly familiar conceptual tools for dealing with such cases: subjective probability. What is the truth value of the belief “it will rain this evening”? But why have such beliefs? On Less Wrong, of all places, surely we know that it’s more proper to have beliefs that are more like “P(it will rain) = 0.25, P(it won’t rain) = 0.75”?)

So let’s set the underdetermination point aside. Might the question of agent-dependency trouble us more, and give us reason to question the solidity of truth as a basis for belief? Is there something significant to the fact that the truth value of the belief “I will go to the beach this evening” depends on my actions?

There is at least one (perhaps trivial) sense in which the answer is a firm “no”. So what if my actions determine whether this particular belief is true? My actions are part of reality, just like snow, just like rain. What makes them special?

Well—you might say—what makes my actions special is that they depend on my decisions, which depend (somehow) on my beliefs. If I come to believe that I will go to the beach, then this either is identical to, or unavoidably causes, my deciding to go to the beach; and deciding to go to the beach causes me to take the action of going to the beach. Thus my belief determines its own truth! Obviously it can’t be determined by its truth, in that case—that would be hopelessly circular!

Of course any philosopher worth his salt will find much to quarrel with, in that highly questionable account of decision-making. For example, “beliefs are prior to decisions” is necessary in order for there to be any circularity, and yet it is, at best, a supremely dubious axiom. Note that reversing that priority makes the circularity go away, leaving us with a naturalistic account of agent-dependent beliefs; free-will concerns remain, but those are not epistemological in nature.

And even free-will concerns evaporate if we adopt the perspective that decisions are not about changing the world, they are about learning what world you live in [LW · GW]. If we take this view, then we are simply done: we have brought “I will go to the beach this evening” in line with “it will rain this evening”, which we have already seen to be no different from “snow is white”. All are simply beliefs about reality. As you gain more information about reality, each of these beliefs might be revealed to be true, or not true.

Very well, but suppose an account (like shminux’s) that leaves no room at all for decision-making is too radical for us to stomach. Suppose we reject it. Is there, then, something special about agent-dependent beliefs?

Let us consider again the belief that “I will go to the beach this evening”. Suppose I come to hold this belief (which, depending on which parts of the above logic we find convincing, either brings about, or is the result of, my decision to go to the beach this evening.) But suppose that this afternoon, a tsunami washes away all the sand, and the beach is closed. Now my earlier belief has turned out to be false—through no actions or decisions on my part!

“Nitpicking!”, you say. Of course unforeseen situations might change my plans. Anyway, what you really meant was something like “I will attempt to go to the beach this evening”. Surely, an agent’s attempt to take some action can fail; there is nothing significant about that!

But suppose that this afternoon, I come down with a cold. I no longer have any interest in beachgoing. Once again, my earlier belief has turned out to be false.

More nitpicking! What you really meant was “I will intend to go to the beach this evening, unless, of course, something happens that causes me to alter my plans.”

But suppose that evening comes, and I find that I just don’t feel like going to the beach, and I don’t. Nothing has happened to cause me to alter my plans, I just… don’t feel like it.

Bah! What you really meant was “I intend to go to the beach, and I will still intend it this evening, unless of course I don’t, for some reason, because surely I’m allowed to change my mind?”

But suppose that evening comes, and I find that not only do I not feel like going to the beach, I never really wanted to go to the beach in the first place. I thought I did, but now I realize I didn’t.

In summary:

There is nothing special about agent-dependent beliefs. They can turn out to be true. They can turn out to be false. That is all.

Conflating beliefs with intentions, decisions, or actions, is a mistake as unfortunate as it is elementary.

And forgetting about probability is, probably, most unfortunate of all.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-10-15T19:25:13.058Z · LW(p) · GW(p)

I agree that whether or not the belief is about something that happens in the future is irrelevant (at least if we're talking about physics-time; ScottG's original post was specific in saying that it was about logical time). I think that I also agree that shminux's view is a consistent way of looking at this. But as you say, if you did adopt that view, then we can't really talk about how to make decisions in the first place, and it would be nice if we could. (Hmm, are we rejecting a true view because it's not useful, in favor of trying to find a view which would be no less true but which would be more useful? That's a postrationalist move right there...)

So that leaves us with your objection to the view where we do try to maintain decisions, and find agent-dependent beliefs problematic. I'm not sure I understand your objection there, however. At least to some extent you seem to be pointing at external circumstances which might affect our decision, but my original comment already noted that external circumstances do also play a role rather than the agent's decision being the sole determinant.

I'm also curious about whether you disagree with the original post [LW · GW] where my comment was posted, and ScottG's argument that "the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them", and that this renders standard Bayesian probability inapplicable. If you disagree with that, then it might be better to have this conversation in the comments of that post, where ScottG might chime in.

Replies from: SaidAchmiz, SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-10-16T05:30:30.289Z · LW(p) · GW(p)

So that leaves us with your objection to the view where we do try to maintain decisions, and find agent-dependent beliefs problematic. I’m not sure I understand your objection there, however. At least to some extent you seem to be pointing at external circumstances which might affect our decision, but my original comment already noted that external circumstances do also play a role rather than the agent’s decision being the sole determinant.

I… don’t know that I can explain my point any better than I already have.

Perhaps I should note that there’s a sense in which “beliefs determine our actions” which I find to be true but uninteresting (at least in this context). This is the utterly banal sense of “if I believe that it is raining outside, then I will bring an umbrella when I go for a walk”—i.e., the sense in which all of our actions are, in one way or another, determined by our beliefs.

Of course, there is nothing epistemologically challenging about this; it is just the ordinary, default state of affairs. You said:

An important framing here is “your beliefs determine your actions, so how do you get the beliefs which cause the best actions”.

If the result of thinking like this is that you decide to adopt false beliefs in order for “better actions” to (allegedly) result than if you had only true beliefs, then this is foolishness; but there is no epistemological challenge here—no difficulty for the project of epistemic rationality. Beyond that, nshepperd’s comments elsethread have dealt with this aspect of the matter, and I have little to add.

The (alleged) difficulty lies with beliefs which not just (allegedly) determine our decisions, but whose truth value is, in turn, determined by our decisions. (For example, “I will go to the beach this evening”.)

But I have shown how there is not, in fact, any difficulty with those beliefs after all.

You said:

A partial response to [nshepperd’s] criticisms would be that, even if you do have truth as the criterion for your beliefs, then this still leaves the truth value of a wide range of beliefs underdetermined;

But I have shown how this is not the case.

Thus your response to nshepperd’s criticisms, it seems, turns out to be invalid.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-10-17T08:36:31.084Z · LW(p) · GW(p)
but there is no epistemological challenge here—no difficulty for the project of epistemic rationality. Beyond that, nshepperd’s comments elsethread have dealt with this aspect of the matter, and I have little to add.
The (alleged) difficulty lies with beliefs which not just (allegedly) determine our decisions, but whose truth value is, in turn, determined by our decisions. (For example, “I will go to the beach this evening”.)
But I have shown how there is not, in fact, any difficulty with those beliefs after all.

Hmm. You repeatedly use the word "difficulty". Are you interpreting me to be saying that this would pose some kind of an insurmountable challenge for standard epistemic rationality? I was trying to say the opposite; that unlike what nshepperd was suggesting, this is perfectly in line and compatible with standard epistemic rationality.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-10-17T17:35:57.426Z · LW(p) · GW(p)

I am referring to this bit [LW(p) · GW(p)]:

But the more general point of the original post is that there are a wide variety of beliefs which are underdetermined by external reality.

And, of course, this bit [LW(p) · GW(p)]:

A partial response to your criticisms would be that, even if you do have truth as the criterion for your beliefs, then this still leaves the truth value of a wide range of beliefs underdetermined [LW(p) · GW(p)]; so one postrational approach would be to use truth as the primary criterion for forming beliefs, and then use other criteria for filling in the beliefs which the criterion of truth doesn’t help us distinguish between.

And I am saying that this position is wrong. I am saying that there is no special underdetermination. I am saying that there is no problem with using truth as the only criterion for beliefs. I am saying, therefore, that there is no reason to use any other criteria for belief, and that “postrationality” as you describe it is unmotivated.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-10-18T06:20:50.544Z · LW(p) · GW(p)

Okay. I should probably give a more concrete example of what I mean.

First, here's a totally ordinary situation that has nothing to do with postrationality: me deciding whether I want to have a Pepsi or a Coke. From my point of view as the decision-maker, there's no epistemically correct or incorrect action; it's up to my preferences to choose one or the other. From the point of view of instrumental rationality, there is of course a correct action: whichever drink best satisfies my preferences at that moment. But epistemic rationality does not tell me which one I should choose; that's in the domain of instrumental rationality.

My claim is that there are analogous situations where the decision you are making is "what should I believe", where epistemic rationality does not offer an opinion one way or the other; the only criterion comes from instrumental rationality, where it would be irrational not to choose the one that best fulfills your preferences.

As an example of such a situation, say that you are about to give a talk to some audience. Let's assume that you are basically well prepared, facing a non-hostile audience etc., so there is no external reason for why this talk would need to go badly. The one thing that most matters is how confident you are in giving the talk, which in turn depends on how you believe it will go:

  • If you believe that this talk will go badly, then you will be nervous and stammer, and this talk will go badly.
  • If you believe that this talk will go well, then you will be confident and focused on your message, and the talk will go well.

Suppose for the sake of example that you could just choose which belief you have, and also that you know what effects this will have. In that case, even though you are choosing which belief to have, from the point of view of epistemic rationality, they are both equally valid. If you choose to believe that the talk will go badly, then it will go badly, so the belief is epistemically valid; if you choose to believe that it will go well, then it will go well, so that belief is epistemically valid as well. The only criteria you get is the one from instrumental rationality: do you prefer your talk to go well, or do you prefer it to go badly?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-10-18T06:48:59.469Z · LW(p) · GW(p)

From my point of view as the decision-maker, there’s no epistemically correct or incorrect action

I’m not sure I know what an “epistemically correct action” or an “epistemically incorrect action” is. Actions aren’t the kinds of things that can be “epistemically correct” or “epistemically incorrect”. This would seem to be a type error.

But epistemic rationality does not tell me which one I should choose; that’s in the domain of instrumental rationality.

Indeed…

My claim is that there are analogous situations where the decision you are making is “what should I believe”, where epistemic rationality does not offer an opinion one way or the other;

The epistemically correct belief is “the belief which is true” (or, of course, given uncertainty: “the belief which is most accurate, given available information”). This is always the case.

The one thing that most matters is how confident you are in giving the talk, which in turn depends on how you believe it will go

The correct belief, obviously, is:

“How the talk will go depends on how confident I am. If I am confident, then the talk will go well. If I am not confident, it will go badly.”

(Well, actually more like: “If I am confident, then the talk is more likely than not to go well. If I am not confident, it is more likely than not to go badly.”)

Conditionalizing, I can then plug in my estimate of the probability that I will be confident.

If I am able to affect this probability—such as by deciding to be confident (if I have this ability), or by taking some other action (such as taking anxiolytic medication, imagining the audience naked, doing some exercise beforehand, etc.)—then, of course, I will do that.

I will then—if I feel like doing so—revise my probability estimate of my confidence, and, correspondingly, my probability estimate of the talk going well. Of course, this is not actually necessary, since it does not affect anything one way or the other.
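
(A minimal worked illustration, with made-up numbers: suppose P(talk goes well | confident) = 0.9, P(talk goes well | not confident) = 0.3, and I estimate P(confident) = 0.7. Then P(talk goes well) = 0.9 × 0.7 + 0.3 × 0.3 = 0.72. If exercising beforehand raises P(confident) to 0.9, the estimate becomes 0.9 × 0.9 + 0.3 × 0.1 = 0.84. The belief tracks the expected effect of the action; at no point is a belief adopted for any reason other than accuracy.)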

Suppose for the sake of example that you could just choose which belief you have

As always, I would choose to have the most accurate belief, of course, as described above.

In that case, even though you are choosing which belief to have, from the point of view of epistemic rationality, they are both equally valid.

No, choosing to have any but the most accurate belief is epistemically incorrect.

The only criteria you get is the one from instrumental rationality: do you prefer your talk to go well, or do you prefer it to go badly?

Indeed not; our criterion is, as always, the one from epistemic rationality, i.e. “have the most accurate beliefs”.

comment by Said Achmiz (SaidAchmiz) · 2018-10-16T05:32:32.288Z · LW(p) · GW(p)

I’m also curious about whether you disagree with the original post where my comment was posted, and ScottG’s argument that “the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them”, and that this renders standard Bayesian probability inapplicable. If you disagree with that, then it might be better to have this conversation in the comments of that post, where ScottG might chime in.

I’m afraid I found that post incomprehensible.

comment by Said Achmiz (SaidAchmiz) · 2018-10-15T19:33:43.996Z · LW(p) · GW(p)

I will have more to say later, but:

But as you say, if you did adopt that view, then we can’t really talk about how to make decisions in the first place, and it would be nice if we could. (Hmm, are we rejecting a true view because it’s not useful, in favor of trying to find a view which would be no less true but which would be more useful? That’s a postrationalist move right there...)

I did not say that.

I said nothing about “it would be nice if we could”, nor did I suggest that I both accept shminux’s view as true, and am willing to reject it due to the aforesaid “it would be nice…” consideration.

Please don’t put words in my mouth to make it seem like I actually already agree with you; that is extremely annoying.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-10-15T19:43:27.937Z · LW(p) · GW(p)

Apologies, I misinterpreted you then.

comment by alkexr · 2018-10-15T10:00:31.797Z · LW(p) · GW(p)
However, if this is really what 'postrationality' is about, then I think it remains safe to say that it is a poisonous and harmful philosophy that has no place on LW or in the rationality project.

It feels like calling someone's philosophy poisonous and harmful doesn't advance the conversation, regardless of its truth value, and this proves the point of the main post well.

Replies from: nshepperd, Richard_Kennaway
comment by nshepperd · 2018-10-15T17:12:14.866Z · LW(p) · GW(p)

Two points:

  1. Advancing the conversation is not the only reason I would write such a thing, but actually it serves a different purpose: protecting other readers of this site from forming a false belief that there's some kind of consensus here that this philosophy is not poisonous and harmful. Now the reader is aware that there is at least debate on the topic.

  2. It doesn't prove the OP's point at all. The OP was about beliefs (and "making sense of the world"). But I can have the belief "postrationality is poisonous and harmful" without having to post a comment saying so, therefore whether such a comment would advance the conversation need not enter into forming that belief, and is in fact entirely irrelevant.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-15T18:14:11.814Z · LW(p) · GW(p)

Yep. In isolation I would be unhappy about this sentence, but given the context I think it's advancing the conversation by expressing a viewpoint about what has been said so we can discuss how the ideas presented are perceived.

comment by Richard_Kennaway · 2018-10-16T09:17:31.094Z · LW(p) · GW(p)

If a philosophy is poisonous and harmful, I think it commendable and necessary to say so.

comment by alkexr · 2018-10-15T10:02:33.784Z · LW(p) · GW(p)

Question: how does postrationality and instrumental rationality relate to each other? To me it appears that you are simply arguing for instrumental rationality over epistemic rationality, or am I missing something?

Replies from: Elo, TAG
comment by Elo · 2018-10-15T11:24:54.394Z · LW(p) · GW(p)

Instrumental rationality informs epistemics, which then informs instrumental rationality, which then informs epistemics.

And like this all the way down.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-10-15T17:43:42.513Z · LW(p) · GW(p)

Just to put this in different words, I'd say the idea of a separation between the instrumental and the epistemic (as those terms are used in this context) is a confusion because the two are literally confused (mixed together). I think what you are seeing is perhaps the result of postrationality treating them as unified rather than trying to treat them as separate (even if they inform each other).

Replies from: Elo
comment by Elo · 2018-10-15T21:52:13.626Z · LW(p) · GW(p)

Yes

comment by TAG · 2018-10-15T21:03:48.632Z · LW(p) · GW(p)

Not speaking for gworley, but I'd say instrumental rationality can be more attainable than epistemic rationality even if it is less valuable.