G Gordon Worley III's Shortform

post by G Gordon Worley III (gworley) · 2019-08-06T20:10:27.796Z · score: 16 (2 votes) · LW · GW · 63 comments


Comments sorted by top scores.

comment by G Gordon Worley III (gworley) · 2019-08-21T12:54:12.964Z · score: 26 (11 votes) · LW · GW

Some thoughts on Buddhist epistemology.

This risks being threatening, upsetting, and heretical to a certain point of view I commonly see expressed on LW, for reasons that will become clear if you keep reading. I don't know if that means you shouldn't read this, but I say so up front so you can make the choice without having to engage with the specifics if you don't want to. I don't think you will be missing out on anything if that warning gives you a tinge of "maybe I won't like reading this".

My mind produces a type error when people try to perform deep and precise epistemic analysis of the dharma. That is, when they try to evaluate the truth of claims made by the dharma, this seems generally fine, but when they go deep enough that they end up trying to evaluate whether the dharma itself is based on something true, I get the type error.

I'm not sure what people trying to do this turn up. My expectation is that their results look like noise if you aggregate over all such attempts. The reason is that the dharma is not founded on episteme.

As a quick reminder, there are at least three categories of knowledge worth considering: doxa, episteme, and gnosis. Doxa might translate as "hearsay" in English; it's knowledge received secondhand, through others' statements about the truth. Episteme is knowledge you come to believe via evaluation of the truth. Gnosis is direct, unmediated-by-ontology knowledge of reality. To these I'll also distinguish techne from episteme, the former being experienced knowledge and the latter being reasoned knowledge.

I'll make the probably not very bold claim that most LW rationalists value episteme above all else, accept techne as evidence, accept doxa as evidence about evidence but only weak evidence of truth itself, and mostly ignore gnosis because it is not "rational": it cannot be put into words, only pointed at by them, and so cannot be analyzed, since there is no ontology or categorization to allow making claims one way or the other about it.

Buddhist philosophy values gnosis above all else, then techne, then doxa, then episteme.

To say a little more, the most important thing in Buddhist thinking is seeing reality just as it is, unmediated by the "thinking" mind, by which we really mean the acts of discrimination, judgement, categorization, and ontology. To be sure, this "reality" is not external reality, which we never get to see directly, but rather our unmediated contact with it via the senses. But for all the value of gnosis, unless you plan to sit on a lotus flower in perfect equanimity forever and never act in the world, it's not enough. Techne is the knowledge we gain through action in the world, and although it does pass judgement and discriminate, it also stays close to the ground and makes few claims. It is deeply embodied in action itself.

I'd say doxa comes next because there is a tradition of passing on the words of enlightened people as they said them and acting, at least some of the time, as if they were 100% true. Don't confuse this for just letting anything in, though: the point is to trust the words of those who have come before and seen more than you, because doing so is often very helpful for learning to see for yourself what was previously invisible. But that seeing is always an action you do yourself, not contingent on the teachings, since those only pointed you towards where to look and always failed to put into words (because it was impossible) what you would find. The old story is that the Buddha, when asked why he should be believed, said don't: try it for yourself and see what you find.

Episteme is last, and that's because it's not to be trusted. Of all the ways of knowing, episteme is the least grounded in reality. This should not be surprising, but it might be, so I'll say a bit about it. Formal methods are not grounded. There's a reason the grounding problem, epistemic circularity, the problem of the criterion, the problem of finding the universal prior, etc. remain fundamentally unsolved: they are unsolvable in a complete and adequate way. Instead we get pragmatic solutions that cross the chasm between reality and belief, between noumena and phenomena, between the ontic and ontology, and this leap of faith means episteme is always contingent on that leap. Even when it proves things, we must be careful: because it's not grounded, we have to check everything it produces by other means. This means going all the way down to gnosis if possible, and techne at the least.

None of this is to say that episteme is not useful for many things and for making predictions, but we hold it at arm's length because of its powerful ability to confuse us if we didn't happen to make the right leaps where we pragmatically had to. It also always leaves something out because it requires distinctions to function, so it is always less complete. At the same time, it often makes predictions that turn out to be true, and the world is better for our powerful application of it. We just have to keep in mind what it is, what it can do, and what its dangers are, and engage with it in a thoughtful, careful way to avoid getting lost and confusing our perception of reality for reality itself.

So when we talk about the dharma or justify our actions on it, it's worth noting that it is not really trying to provide consistent episteme. It's grounded on gnosis and techne, presented via doxa, and only after the fact might we try to extend it via episteme to get an idea of where to look to understand it better. Thus it's a strange inversion to ask the dharma for episteme-based proofs. It can't give them, nor does it try, because its episteme is not consistent and cannot be because it chooses completeness instead.

So where does this leave us? If you want to evaluate the dharma, you'll have to do it yourself. You can't argue about it or reason it out; you have to sit down and look at the nature of reality without conceptualizing it. Maybe that means you won't engage with it, since it doesn't choose to accept the framing of episteme. That seems fine if you are so inclined. But then don't be surprised if the dharma claims you are closed-minded, if you feel like it attacks your identity, and if it feels just true enough that you can't easily dismiss it out of hand although you might like to.

comment by Kaj_Sotala · 2019-08-21T18:38:08.438Z · score: 21 (8 votes) · LW · GW
So when we talk about the dharma or justify our actions on it, it's worth noting that it is not really trying to provide consistent episteme. [...] Thus it's a strange inversion to ask the dharma for episteme-based proofs. It can't give them, nor does it try, because its episteme is not consistent and cannot be because it chooses completeness instead.

In my view, this seems like a clear failing. The fact that the dharma comes from a tradition where this has usually been the case is not an excuse for not trying to fix it.

Yes, the method requires temporarily suspending episteme-based reasoning and engaging with less conceptual forms of seeing. But it can still be justified and explained using episteme-based models; if it could not, there would be little reason to expect that it would be worth engaging with.

This is not just a question of "the dharma has to be able to justify itself"; it's also that leaving out the episteme component leaves the system impoverished, as noted e.g. here:

Recurrent training to attend to the sensate experience moment-by-moment can undermine the capacity to make meaning of experience. (The psychoanalyst Wilfred Bion described this as an ‘attack on linking’, that is, on the meaning-making function of the mind.) When I ask these patients how they are feeling, or what they are thinking, or what’s on their mind, they tend to answer in terms of their sensate experience, which makes it difficult for them to engage in a transformative process of psychological self-understanding.

and here:

In important ways, it is not possible to encounter our unconscious – at least in the sense implied by this perspective – through moment-to-moment awareness of our sensate experience. Yes, in meditation we can have the experience of our thoughts bubbling just beneath the surface – what Shinzen Young calls the brain’s pre-processing – but this is not the unconscious that I’m referring to, or at least not all of it.
Let me give an example. Suppose that I have just learned that a close friend has died. I’m deeply saddened by this news. Moments later, I spill a cup of coffee on my new pants and become quite angry. Let’s further suppose that, throughout my life, I’ve had difficulty feeling sadness. For reasons related to my personal history, sadness frightens me. In my moment of anger, if I adopt the perspective of awareness of sensate experience, moment-by-moment, then I will have no access to the fact that I am sad. On the contrary, my sensate experience seems to reflect the fact that I am angry. But given what I know about myself, it’s quite reasonable to posit that my anger is a defense against the feeling of sadness, a feeling of which I am unconscious as I am caught up in my anger.
comment by G Gordon Worley III (gworley) · 2019-08-21T19:31:50.784Z · score: 9 (4 votes) · LW · GW

Hmm, I feel like there's multiple things going on here, but I think it hinges on this:

Yes, the method requires temporarily suspending episteme-based reasoning and engaging with less conceptual forms of seeing. But it can still be justified and explained using episteme-based models; if it could not, there would be little reason to expect that it would be worth engaging with.

Different traditions vary on how much they emphasize models and episteme. None of them completely ignores it, though; they only seek to keep it in its proper place. It's not that episteme is useless, only that it is not primary. You of course should include it because it's part of the world, and to deny it would lead to confusion and suffering. As you note with your first example especially, some people learn to turn off the discriminating mind rather than hold it as object, and they are worse for it because then they can't engage with it anymore. Turning it off is only something you could safely do if you had really become so enlightened that you had no shadow and would never accumulate any additional shadow, and even then it seems strange from where I stand to do that, although maybe it would make sense to me if I were in a position where it were a reasonable and safe option.

So to me this reads like an objection to a position I didn't mean to take. I mean to say that episteme has a place and is useful; it's just not taken as primary to understanding. At some points Buddhist episteme will say contradictory things, and that's fine and expected, because dharma episteme is normally post hoc rather than ante hoc (though it is still expected to be rational right up until it is forced to hit a contradiction), and ante hoc episteme is okay so long as it is later verified via gnosis or techne.

comment by romeostevensit · 2019-09-11T15:42:32.267Z · score: 14 (5 votes) · LW · GW

>unmediated-by-ontology knowledge of reality.

I think this is a confused concept, related to wrong-way-reduction.

comment by G Gordon Worley III (gworley) · 2019-09-21T22:10:28.798Z · score: 2 (1 votes) · LW · GW

I've thought about this a bit, and I don't see my way through to what you are thinking that makes you suggest this, since I don't see a reduction happening here, much less one moving towards bundling together confusion that only looks simpler. Can you say a bit more that might make your perspective on this clearer?

comment by romeostevensit · 2019-09-25T00:34:58.925Z · score: 4 (2 votes) · LW · GW

In particular, I think under this formulation knowledge and ontology largely refer to the same thing, which is part of the reason I think this formulation is mistaken. Separately, I think 'reality' has too many moving parts to be useful for the role it's being used for here.

comment by G Gordon Worley III (gworley) · 2019-09-26T17:21:14.960Z · score: 2 (1 votes) · LW · GW

Maybe, although there is a distinction I'm trying to make between knowledge and ontological knowledge that perhaps isn't coming across very clearly. If it is coming across, and you have some particular argument for why there isn't or can't be such a meaningful distinction, I'd be interested to hear it.

As for my model of reality having too many moving parts, you're right, I'm not totally unconfused about everything yet, and it's the place the remaining confusion lives.

comment by Chris_Leong · 2019-08-22T00:05:02.729Z · score: 4 (3 votes) · LW · GW

I agree with Kaj Sotala and Viliam that episteme is underweighted in Buddhism, but thanks for explicating that worldview.

comment by Viliam · 2019-08-21T22:57:21.901Z · score: 4 (2 votes) · LW · GW
the most important thing in Buddhist thinking is seeing reality just as it is, unmediated by the "thinking" mind, by which we really mean the acts of discrimination, judgement, categorization, and ontology. To be sure, this "reality" is not external reality, which we never get to see directly, but rather our unmediated contact with it via the senses.

The "unmediated contact via the senses" can only give you sensual inputs. Everything else contains interpretation. That means, you can only have "gnosis" about things like [red], [warm], etc. Including a lot of interesting stuff about your inner state, of course, but still fundamentally of the type [feeling this], [thinking that], and perhaps some usually-unknown-to-non-Buddhists [X-ing Y], etc.

Poetically speaking, these are the "atoms of experience". (Some people would probably say "qualia".) But some interpretation needs to come in to build molecules out of these atoms. Without interpretation, you could barely distinguish between a cat and a warm pillow... which IMHO is a bit insufficient for a supposedly supreme knowledge.

comment by hamnox · 2019-09-25T02:31:11.260Z · score: 1 (1 votes) · LW · GW

I am glad for having read this, but can't formulate my thoughts super clearly. Just have this vague sense that you're using too many groundless words and not connecting to the few threads of gnosis(?) that other rationalists would have available.

comment by G Gordon Worley III (gworley) · 2019-09-09T00:49:00.360Z · score: 21 (11 votes) · LW · GW

If an organism is a thing that organizes, then a thing that optimizes is an optimism.

comment by G Gordon Worley III (gworley) · 2019-08-06T20:10:28.995Z · score: 13 (7 votes) · LW · GW

I have plans to write this up more fully as a longer post explaining the broader ideas with visuals, but I thought I would highlight one that is pretty interesting and try out the new shortform feature at the same time! As such, this is not optimized for readability, has no links, and I don't try to back up my claims. You've been warned!

Suppose you frequently found yourself identifying with and feeling like you were a homunculus controlling your body and mind: there's a real you buried inside, and it's in the driver's seat. Sometimes your mind and body do what "you" want, sometimes they don't, and this is frustrating. Plenty of folks reify this in slightly different ways: rider and elephant, monkey and machine, prisoner in cave (or audience member in theater), and, to a certain extent, variations on the S1/S2 model. In fact, I would propose this is a kind of dual process theory of mind that has you identifying with one of the processes.

A few claims.

First, this is a kind of constant, low-level dissociation. It's not the kind of high-intensity dissociation we often think of when we use that term, but it's still a separation of sense of self from the physical embodiment of self.

Second, this is projection, and thus a psychological problem in need of resolving. There's nothing good about thinking of yourself this way; it's a confusion that may be temporarily helpful but it's also something you need to learn to move beyond via first reintegrating the separated sense of self and mind/body.

Third, people drawn to the rationalist community are unusually likely to be the sort of folks who dissociate and identify with the homunculus, S2, the rider, far mode, or whatever you want to call it. It gives them a world view that says "ah, yes, I know what's right, but for some reason my stupid brain doesn't do what I want, so let's learn how to make it do what I want" when this is in fact a confusion, because it's the very brain that's "stupid" that's producing the feeling that you think you know what you want!

To speculate a bit, this might help explain some of the rationalist/meta-rationalist divide: rationalists are still dissociating, meta-rationalists have already reintegrated, and as a result we care about very different things and look at the world differently because of it. That's very speculative, though, and I have nothing other than weak evidence to back it up.

comment by G Gordon Worley III (gworley) · 2019-09-07T23:24:45.818Z · score: 12 (8 votes) · LW · GW

I think it's safe to say that many LW readers don't feel like spirituality [LW · GW] is a big part of their life, yet many (probably most) people do experience a thing that goes by many names---the inner light, Buddha-nature, shunyata, God [LW · GW]---and falls under the heading of "spirituality". If you're not sure what I'm talking about, I'm pointing to a common human experience you aren't having.

Only, I don't think you're not having it, you just don't realize you are having those experiences.

One way some people get in touch with this thing, which I like to think of as "the source" and "naturalness" and might describe as the silently illuminated wellspring, is with drugs, especially psychedelics but really any drug that gets you to either reduce activity of the default-mode network or at least notice its operation and stop identifying with it (dissociatives may function like this). In this light, I think of drug users as very spiritual people, only they are unfortunately doing it in a way that is often destructive to their bodies and causes heedlessness (causes them to fail to perceive reality accurately, and so they may act out of confusion and ignorance, leading to greater suffering).

Another way some people manage to get in touch with the source is through exercise. They exercise hard enough that their body gives up devoting enough energy to the brain that the default-mode network shuts down and then they get a "high".

Another way I think many people touch the source is through nostalgia. My theory is that it works like this: when we are young the conceptualizing mind is weak and we see reality as it is more clearly, even if in a way that is very ignorant of causality; then we get older and understand causality better via stronger models and more conceptualization and discernment, but at the cost of seeing reality less directly and more through our maps; nostalgia is then a feeling of longing to get back to the source, to get back to seeing reality directly the way we did when we were younger.

There are many other ways people get in touch with ultimate reality to gain gnosis of it. The ones I've just described are all "inferior" in the sense that they partially get back to the source in a fragmented way that lets you get only a piece of it. There are "superior" methods, though, maybe the purest (in the sense of having the least extra stuff and the most clear, direct access) of which I consider to be meditation. But however you do it, the human experience of spirituality is all around you and always available, only we've failed to notice the many ways we get at it in the modern, secular world by denying the spirituality of these experiences.

comment by Duncan_Sabien · 2019-09-11T01:21:09.900Z · score: 19 (7 votes) · LW · GW
Only, I don't think you're not having it, you just don't realize you are having those experiences.

The mentality that lies behind a statement like that seems to me to be pretty dangerous. This is isomorphic to "I know better than other people what's going on in those other people's heads; I am smarter/wiser/more observant/more honest."

Sometimes that's *true.* Let's not forget that. Sometimes you *are* the most perceptive one in the room.

But I think it's a good and common standard to be skeptical of (and even hostile toward) such claims (because such claims routinely lead to unjustified and not-backed-by-reality dismissal and belittlement and marginalization of the "blind" by the "seer"), unless they come along with concrete justification:

  • Here are the observations that led me to claim that all people do in fact experience X, in direct contradiction of individuals claiming otherwise; here's why I think I'm correct to ignore/erase those people's experience.
  • Here are my causal explanations of why and how people would become blindspotted on X, so that it's not just a blanket assertion and so that people can attempt to falsify my model.
  • Here are my cruxes surrounding X; here's what would cause me to update that I was incorrect in the conclusions I was reaching about what's going on in other people's heads

... etc.

https://slatestarcodex.com/2017/10/02/different-worlds/

comment by Ben Pace (Benito) · 2019-09-11T01:42:23.383Z · score: 9 (2 votes) · LW · GW

Yeah, I think there's a subtle distinction. While it's often correct to believe things that you have a hard time communicating explicitly (e.g. most of my actual world model at any given time), the claim that there's something definitely true but that in-principle I can't persuade you of and also can't explain to you, especially when used by a group of people to coordinate around resources, is often functioning as a coordination flag and not as a description of reality.

comment by Raemon · 2019-09-11T21:31:30.069Z · score: 7 (3 votes) · LW · GW

Just wanted to note that I am thinking about this exchange, hope to chime in at some point. I'm not sure whether I'm on the same page as Ben about it. May take a couple days to have time to respond in full.

comment by Raemon · 2019-09-13T03:26:15.697Z · score: 5 (2 votes) · LW · GW

Just a quick update: the mod team just chatted a bunch about this thread. There’s a few different things going on.

It’ll probably be another day before a mod follows up here.

comment by Ben Pace (Benito) · 2019-09-16T23:23:16.063Z · score: 11 (5 votes) · LW · GW

[Mod note] I thought for a while about how shortform interacts with moderation here. When Ray initially wrote the shortform announcement post [LW · GW], he described the features, goals, and advice for using it, but didn’t mention moderation. Let me follow-up by saying: You’re welcome and encouraged to enforce whatever moderation guidelines you choose to set on shortform, using tools like comment removal, user bans, and such. As a reminder, see the FAQ section on moderation [LW · GW] for instructions on how to use the mod tools. Do whatever you want to help you think your thoughts here in shortform and feel comfortable doing so.

Some background thoughts on this: In other places on the internet, being blocked locks you out of the communal conversation, but there are two factors that make it pretty different here. Firstly, banning someone from a post on LW means they can’t reply to the content they’re banned from, but it doesn’t hide your content from them or their content from you. And secondly, everyone here on LessWrong has a common frontpage where the main conversation happens - the shortform is a low-key place and a relatively unimportant part of the conversation. (You can be banned from posts on frontpage, but that action requires meeting high standards not required for shortform bans.) Relatively speaking, shortform is very low-key, and I expect the median post gets 3x-10x fewer views than the median frontpage post. It’s a place for more casual conversation, hopefully leading to the best ideas getting made into posts - indeed we’re working on adding an option to turn shortform posts into blogposts. This is why we never frontpage a user’s shortform feed - they rarely meet frontpage standards, and they’re not supposed to.

Just to mention this thread in particular, Gordon is well within his rights to ban users or remove their comments from his shortform posts if he wishes to, and the LW mod team will back him up when he wants to do that.

comment by G Gordon Worley III (gworley) · 2019-09-11T02:31:31.670Z · score: 6 (6 votes) · LW · GW

Sure, this is short form. I'm not trying very hard to make a complete argument to defend my thoughts, just putting them out there. There is no norm that I must always abide by, everywhere, to present the best (for some notion of best) version of my reasons for things I claim, least of all, I think, in this space as opposed to, say, in a frontpage post. Thus it feels to me a bit out of place to object in this way here, sort of like objecting that my fridge poetry is not very good or my shower singing is off key.

Now, your point is well taken, but I also generally choose to simply not be willing to cross more than a small amount of inferential distance in my writing (mostly because I think slowly and it requires significant time and effort for me to chain back far enough to be clear to successively wider audiences), since I often think of it as leaving breadcrumbs for those who might be nearby rather than leading people a long way towards a conclusion. I trust people to think things through for themselves and agree with me or not as their reason dictates.

Yes, this means I am often quite far from easily verifying the most complex models I have, but such seems to be the nature of complex models that I don't even have complete in my own mind yet, much less complete in a way that I could lay out precisely enough to be verified point by point. This perhaps makes me frustratingly inscrutable about my most exciting claims to those with the least similar priors, but I view it as a tradeoff for aiming to better explain more of the world to myself and those much like me, at the expense of failing to make those models legible enough for those insufficiently similar to me to verify them.

Maybe my circumstances will change enough that one day I'll make a much different tradeoff?

comment by Duncan_Sabien · 2019-09-11T04:01:38.675Z · score: 2 (3 votes) · LW · GW

This response missed my crux.

What I'm objecting to isn't the shortform, but the fundamental presumptuousness inherent in declaring that you know better than everyone else what they're experiencing, *particularly* in the context of spirituality, where you self-describe as more advanced than most people.

To take a group of people (LWers) who largely say "nah, that stuff you're on is sketchy and fake" and say "aha, actually, I secretly know that you're in my domain of expertise and don't even know it!" is a recipe for all sorts of bad stuff. Like, "not only am I *not* on some sketchy fake stuff, I'm actually superior to my naysayers by very virtue of the fact that they don't recognize what I'm pointing at! Their very objection is evidence that I see more clearly than they do!"

I'm pouring a lot into your words, but the point isn't that your words carried all that so much as that they COULD carry all that, in a motte-and-bailey sort of way. The way you're saying stuff opens the door to abuse, both social and epistemic. My objection wasn't actually a call for you to give more explanation. It was me saying "cut it out," while at the same time acknowledging that one COULD, in principle, make the same claim in a justified fashion, if they cared to.

comment by G Gordon Worley III (gworley) · 2019-09-11T06:25:22.932Z · score: 5 (7 votes) · LW · GW

Note: what follows responds literally to what you said. I'm suspicious enough that my interpretation is correct that I'll respond based on it, but I'm open to the possibility this was meant more metaphorically and I've misunderstood your intention.

It was me saying "cut it out,"

Ah, but that's not up to you, at least not here. You are welcome to dislike what I say, claim or argue that I am dangerous in some way, downvote me, flag my posts, etc. BUT it's not up to you to enforce a norm here to the best of my knowledge, even if it's what you would like to do.

Sorry if that is uncharacteristically harsh and direct of me, but if that was your motivation, I think it important to say I don't recognize you as having the authority to do that in this space, consider it a violation of my commenting guidelines, and will delete future comments that attempt to do the same.

comment by Ben Pace (Benito) · 2019-09-11T08:35:46.597Z · score: 11 (3 votes) · LW · GW

Hey Gordon, let me see if I understand your model of this thread. I’ll write mine and can you tell me if it matches your understanding?

  • You write a post giving your rough understanding of a commonly discussed topic that many are confused by
  • Duncan objects to a framing sentence that he claims means “I know better than other people what's going on in those other people's heads; I am smarter/wiser/more observant/more honest." because it seems inappropriate and dangerous in this domain (spirituality)
  • You say “Dude, I’m just getting some quick thoughts off my chest, and it’s hard to explain everything”
  • Duncan says you aren’t responding to him properly - he does not believe this is a disagreement but a norm-violation
  • You say that Duncan is not welcome to prosecute norm violations on your wall unless they are norms that you support
comment by G Gordon Worley III (gworley) · 2019-09-11T15:19:12.165Z · score: 4 (2 votes) · LW · GW

Yes, that matches my own reading of how the interaction progressed, caveat any misunderstanding I have of Duncan's intent.

comment by Ben Pace (Benito) · 2019-09-11T17:33:48.345Z · score: 13 (3 votes) · LW · GW

*nods* Then I suppose I feel confused by your final response.

If I imagine writing a shortform post and someone said it was:

  • Very rude to another member of the community
  • Endorsing a study that failed to replicate
  • Lying about an experience of mine
  • Trying to unfairly change a narrative so that I was given more status

I would often be like “No, you’re wrong” or maybe “I actually stand by it and intended to be rude” or “Thanks, that’s fair, I’ll edit”. I can also imagine times when the commenter is needlessly aggressive and uncooperative, where I’d just strong-downvote and ignore.

But I’m confused by saying “you’re not allowed to tell me off for norm-violations on my shortform”. To apply that principle more concretely, it could say “you’re not allowed to tell me off for lying on my shortform”.

My actual model of you feels a bit confused by Duncan’s claim or something, and wants to fight back against being attacked for something you don’t see as problematic. Like, it feels presumptuous of Duncan to walk into your post and hold you to what feels mostly like high standards of explanation, and you want to (rightly) say that he’s not allowed to do that.

Does that all seem right?

comment by G Gordon Worley III (gworley) · 2019-09-11T21:59:03.998Z · score: 1 (3 votes) · LW · GW

Yes. To add to this: what I'm most strongly reacting to is not what he says he's doing explicitly, which I'm fine with, but what further conversation suggests he is trying to do, namely to act as norm enforcer rather than as norm enforcement recommender.

comment by Duncan_Sabien · 2019-09-11T22:05:19.224Z · score: 5 (2 votes) · LW · GW

I explicitly reject Gordon's assertions about my intentions as false, and ask (ASK, not demand) that he justify (i.e. offer cruxes) or withdraw them.

comment by G Gordon Worley III (gworley) · 2019-09-11T23:49:31.065Z · score: 3 (6 votes) · LW · GW

I cannot adequately do that here because it relies on information you conveyed to me in a non-public conversation.

I accept that you say that's not what you're doing, and I am happy to concede that your internal experience of yourself as you experience it tells you that you are doing what you are doing, but I now believe that my explanation better describes why you are doing what you are doing than the explanation you are able to generate to explain your own actions.

The best I can maybe offer is that I believe you have said things that are better explained by an intent to enforce norms rather than to argue for norms and imply that the general case should be applied in this specific case. I would say the main lines of evidence revolve around how I interpret your turns of phrase, how I read your tone (confrontational and defensive), which aspects of things I have said you have chosen to respond to, how you have directed the conversation, and my general model of human psychology with the specifics you are giving me filled in.

Certainly I may be mistaken in this case and I am reasoning off circumstantial evidence which is not a great situation to be in, but you have pushed me hard enough here and elsewhere that it has made me feel it is necessary to act to serve the purpose of supporting the conversation norms I prefer in the places you have engaged me. I would actually really like this conversation to end because it is not serving anything I value, other than that I believe not responding would simply allow what I dislike to continue and be subtly accepted, and I am somewhat enjoying the opportunity to engage in ways I don't normally so I can benefit from the new experience.

comment by Duncan_Sabien · 2019-09-12T00:21:46.179Z · score: 4 (4 votes) · LW · GW

I note for the record that the above is strong evidence that Gordon was not just throwing an offhand turn of phrase in his original post; he does and will regularly decide that he knows better than other people what's going on in those other people's heads. The thing I was worried about, and attempting to shine a light on, was not in my imagination; it's a move that Gordon endorses, on reflection, and it's the sort of thing that, historically, made the broader culture take forever to recognize e.g. the existence of people without visual imagery, or the existence of episodics, or the existence of bisexuals, or any number of other human experiences that are marginalized by confident projection.

I'm comfortable with just leaving the conversation at "he, I, and LessWrong as a community are all on the same page about the fact that Gordon endorses making this mental move." Personally, I find it unjustifiable and morally abhorrent. Gordon clearly does not. Maybe that's the crux.

comment by Vladimir_Nesov · 2019-09-12T01:30:38.792Z · score: 19 (7 votes) · LW · GW

[He] does and will regularly decide that he knows better than other people what's going on in those other people's heads. [...] Personally, I find it unjustifiable and morally abhorrent.

How can it be morally abhorrent? It's an epistemic issue. Factual errors often lead to bad consequences, but that doesn't make those errors moral errors. A moral error is an error about a moral fact, an assignment of value to situations, as opposed to a prediction of what's going on. And what someone thinks is a factual question, not a question of assigning value to an event.

comment by Wei_Dai · 2019-09-12T02:03:07.312Z · score: 10 (4 votes) · LW · GW

Things that are morally abhorrent are not necessarily moral errors. For example, I can find wildlife suffering morally abhorrent, but there are obviously no moral errors or any kind of errors being committed there. Given that the dictionary defines abhorrent as "inspiring disgust and loathing; repugnant", I think "I find X morally abhorrent" just means "my moral system considers X to be very wrong or to have very low value."

comment by Vladimir_Nesov · 2019-09-12T02:38:31.260Z · score: 7 (4 votes) · LW · GW

That's one way for my comment to be wrong, as in "Systematic recurrence of preventable epistemic errors is morally abhorrent."

When I was writing the comment, I was thinking of another way it's wrong: given morality vs. axiology distinction, and distinction between belief and disclosure of that belief, it might well be the case that it's a useful moral principle to avoid declaring beliefs about what others think, especially when those others disagree with the declarations. In that case it's a violation of this principle, a moral wrong, to declare such beliefs. (A principle like this gets in the way of honesty, so promoting it is contentious and shouldn't be an implicit background assumption. And the distinction between belief and its declaration was not clearly made in the above discussion.)

comment by Duncan_Sabien · 2019-09-12T08:34:55.065Z · score: 4 (2 votes) · LW · GW

I find it morally abhorrent because, when not justified and made-cruxy (i.e. when done the only way I've ever seen Gordon do it), it's tantamount to trying to erase another person/another person's experience, and (as noted in my first objection) it often leads, in practice, to socially manipulative dismissiveness and marginalization that's not backed by reality.

comment by Vladimir_Nesov · 2019-09-12T14:27:18.916Z · score: 11 (6 votes) · LW · GW

So it's a moral principle under the belief vs. declaration distinction (as in this comment [LW · GW]). In that case I mostly object to not making that distinction (a norm to avoid beliefs of that form is on entirely different level than a norm to avoid their declarations).

Personally I don't think the norm about declarations is on the net a good thing, especially on LW, as it inhibits talking about models of thought. The examples you mentioned are important but should be covered by a more specialized norm that doesn't cause as much collateral damage.

comment by Duncan_Sabien · 2019-09-13T05:44:19.954Z · score: 6 (3 votes) · LW · GW

I'm not sure I'm exactly responding to what you want me to respond to, but:

It seems to me that a declaration like "I think this is true of other people in spite of their claims to the contrary; I'm not even sure if I could justify why? But for right now, that's just the state of what's in my head"

is not objectionable/doesn't trigger the alarm I was trying to raise. Because even though it fails to offer cruxes or detail, it at least signals that it's not A STATEMENT ABOUT THE TRUE STATE OF THE UNIVERSE, or something? Like, it's self-aware about being a belief that may or may not match reality?

Which makes me re-evaluate my response to Gordon's OP and admit that I could have probably offered the word "think" something like 20% more charity, on the same grounds, though on net I still am glad that I spelled out the objection in public (like, the objection now seems to me to apply a little less, but not all the way down to "oops, the objection was fundamentally inappropriate").

comment by Vladimir_Nesov · 2019-09-13T16:48:51.968Z · score: 9 (4 votes) · LW · GW

(By "belief" I meant a belief that talkes place in someone's head, and its existence is not necessarily communicated to anyone else. So an uttered statement "I think X" is a declaration of belief in X, not just a belief in X. A belief in X is just a fact about that person's mind, without an accompanying declaration. In this framing, the version of the norm about beliefs (as opposed to declarations) is the norm not to think certain thoughts, not a norm to avoid sharing the observations about the fact that you are thinking them.)

I think a salient distinction between declarations of "I think X" and "it's true that X" is a bad thing, as described in this comment [LW · GW]. The distinction is that in the former case you might lack arguments for the belief. But if you don't endorse the belief, it's no longer a belief, and "I think X" is a bug in the mind that shouldn't be called "belief". If you do endorse it, then "I think X" does mean "X". It is plausibly a true statement about the state of the universe, you just don't know why; your mind inscrutably says that it is and you are inclined to believe it, pending further investigation.

So the statement "I think this is true of other people in spite of their claims to the contrary" should mean approximately the same as "This is true of other people in spite of their claims to the contrary", and a meaningful distinction only appears with actual arguments about those statements, not with different placement of "I think".

comment by G Gordon Worley III (gworley) · 2019-09-13T22:43:17.859Z · score: 8 (4 votes) · LW · GW

I forget if we've talked about this specifically before, but I rarely couch things in ways that make clear I'm talking about what I think rather than what is "true" unless I am pretty uncertain and want to make that really clear or expect my audience to be hostile or primarily made up of essentialists. This is the result of having an epistemology where there is no direct access to reality so I literally cannot say anything that is not a statement about my beliefs about reality, so saying "I think" or "I believe" all the time is redundant because I don't consider eternal notions of truth meaningful (even mathematical truth, because that truth is contingent on something like the meta-meta-physics of the world and my knowledge of it is still mediated by perception, cf. certain aspects of Tegmark).

I think of "truth" as more like "correct subjective predictions, as measured against (again, subjective) observation", so when I make claims about reality I'm always making what I think of as claims about my perception of reality since I can say nothing else and don't worry about appearing to make claims to eternal, essential truth since I so strongly believe such a thing doesn't exist that I need to be actively reminded that most of humanity thinks otherwise to some extent. Sort of like going so hard in one direction that it looks like I've gone in the other because I've carved out everything that would have allowed someone to observe me having to navigate between what appear to others to be two different epistemic states where I only have one of them.

This is perhaps a failure of communication, and I think I speak in ways in person that make this much clearer and then I neglect the aspects of tone not adequately carried in text alone (though others can be the judge of that, but I basically never get into discussions about this concern in person, even if I do get into meta discussions about other aspects of epistemology). FWIW, I think Eliezer has (or at least had) a similar norm, though to be fair it got him into a lot of hot water too, so maybe I shouldn't follow his example here!

comment by Zack_M_Davis · 2019-09-12T02:13:09.913Z · score: 13 (8 votes) · LW · GW

leaving the conversation at "he, I, and LessWrong as a community are all on the same page about the fact that Gordon endorses making this mental move."

Nesov scooped me [LW · GW] on the obvious objection, but as long as we're creating common knowledge [LW · GW], can I get in on this? I would like you and Less Wrong as a community to be on the same page about the fact that I, Zack M. Davis, endorse making the mental move of deciding that I know better than other people what's going on in those other people's heads when and only when it is in fact the case that I know better than those other people what's going on in their heads (in accordance with the Litany of Tarski).

the existence of bisexuals

As it happens, bisexual arousal patterns in men are surprisingly hard to reproduce in the lab![1] This is a (small, highly inconclusive) example of the kind of observation that one might use to decide whether or not we live in a world in which the cognitive algorithm of "Don't decide that you know other people's minds better than they do" performs better or worse than other inference procedures.


  1. J. Michael Bailey, "What Is Sexual Orientation and Do Women Have One?", section titled "Sexual Arousal Patterns vs. the Kinsey Scale: The Case of Male Bisexuality" ↩︎

comment by Duncan_Sabien · 2019-09-12T08:32:46.114Z · score: 4 (2 votes) · LW · GW
when and only when it is in fact the case that I know better than those other people what's going on in their heads (in accordance with the Litany of Tarski).

Yes, as clearly noted in my original objection, there is absolutely a time and a place for this, and a way to do it right; I too share this tool when able and willing to justify it. It's only suspicious when people throw it out solely on the strength of their own dubious authority. My whole objection is that Gordon wasn't bothering to (I believe as a cover for not being able to).

comment by Zack_M_Davis · 2019-09-12T15:06:30.513Z · score: 13 (4 votes) · LW · GW

as clearly noted in my original objection

Acknowledged. (It felt important to react to the great-grandparent as a show of moral resistance to appeal-to-inner-privacy conversation halters [LW · GW], and it was only after posting the comment that I remembered that you had acknowledged the point earlier in the thread, which, in retrospect, I should have at least acknowledged even if the great-grandparent still seemed worth criticizing.)

there is absolutely a time and a place for this

Exactly—and lesswrong.com is the place for people to report on their models of reality, which includes their models of other people's minds as a special case.

Other places in Society are right to worry about erasure, marginalization, and socially manipulative dismissiveness [LW · GW]! But in my rationalist culture, while standing in the Citadel of Truth [LW · GW], we're not allowed to care whether a map is marginalizing or dismissive; we're only allowed to care about whether the map reflects the territory. (And if there are other cultures competing for control of the "rationalist" brand name, then my culture is at war with them.)

My whole objection is that Gordon wasn't bothering to

Great! Thank you for criticizing people who don't justify their beliefs with adequate evidence and arguments. That's really useful for everyone reading!

(I believe as a cover for not being able to).

In context, it seems worth noting that this is a claim about Gordon's mind, and your only evidence for it is absence-of-evidence (you think that if he had more justification, he would be better at showing it). I have no problem with this (as we know, absence of evidence is evidence of absence), but it seems in tension with some of your other claims?

comment by Vladimir_Nesov · 2019-09-12T15:55:52.237Z · score: 10 (5 votes) · LW · GW

criticizing people who don't justify their beliefs with adequate evidence and arguments

I think justification is in the nature of arguments, but not necessary for beliefs or declarations of beliefs. A belief offered without justification is a hypothesis called to attention. It's concise, and if handled carefully, it can be sufficient for communication. As evidence, it's a claim about your own state of mind, which holds a lot of inscrutable territory that nonetheless can channel understanding that doesn't yet lend itself to arguments. Seeking arguments is certainly a good thing, to refactor and convey beliefs, but that's only a small part of how human intelligence builds its map.

comment by Duncan_Sabien · 2019-09-13T05:47:53.251Z · score: 2 (1 votes) · LW · GW

Yeah, if I had the comment to rewrite (I prefer not to edit it at this point) I would say "My whole objection is that Gordon wasn't bothering to (and at this point in the exchange I have a hypothesis that it's reflective of not being able to, though that hypothesis comes from gut-level systems and is wrong-until-proven-right as opposed to, like, a confident prior)."

comment by G Gordon Worley III (gworley) · 2019-09-13T22:59:14.192Z · score: 7 (4 votes) · LW · GW

So, having a little more space from all this now, I'll say that I'm hesitant to try to provide justifications for two reasons. Certain parts of the argument require explaining complex internal models of human minds that are a level more complex than I can explain even though I'm using them (I only seem to be able to interpret myself coherently one level of organization less than the maximum level of organization present in my mind). And other parts of the argument require gnosis of certain insights that I (and, to the best of my knowledge, anyone else) don't know how to readily convey without hundreds to thousands of hours of meditation and one-on-one interactions (though I do know a few people who continue to hope that they may yet discover a way to make that kind of thing scalable, even though we haven't figured it out in 2500 years, maybe because we were missing something important).

So it is true that I can't provide adequate episteme of my claim, and maybe that's what you're reacting to. I don't consider this a problem, but I also recognize that within some parts of the rationalist community that is considered a problem (I model you as being one such person, Duncan). So given that, I can see why from your point of view it looks like I'm just making stuff up or worse, since I can't offer "justified belief" that you'd accept as "justified", and I'm not really much interested in this particular case in changing your mind, as I don't yet completely know myself how to generate that change in stance towards epistemology in others, even though I encountered evidence that led me to that conclusion myself.

comment by Vaniver · 2019-09-14T00:15:34.972Z · score: 23 (8 votes) · LW · GW

There's a dynamic here that I think is somewhat important: socially recognized gnosis.

That is, contemporary American society views doctors as knowing things that laypeople don't know, and views physicists as knowing things that laypeople don't know, and so on. Suppose a doctor examines a person and says "ah, they have condition X," and Amy responds with "why do you say that?", and the doctor responds with "sorry, I don't think I can generate a short enough explanation that is understandable to you." It seems like the doctor's response to Amy is 'socially justified', in that the doctor won't really lose points for referring to a pre-existing distinction between those-in-the-know and laypeople (except maybe for doing it rudely or gracelessly). There's an important sense in which society understands that it in fact takes many years of focused study to become a physicist, and physicists should not be constrained by 'immediate public justification' or something similar.

But then there's a social question, of how to grant that status. One might imagine that we want astronomers to be able to do their astronomy and have their unintelligibility be respected, while we don't want to respect the unintelligibility of astrologers.

So far I've been talking 'nationally' or 'globally' but I think a similar question holds locally. Do we want it to be the case that 'rationalists as a whole' think that meditators have gnosis and that this is respectable, or do we want 'rationalists as a whole' to think that any such respect is provisional or 'at individual discretion' or a mistake?

That is, when you say:

I don't consider this a problem, but I also recognize that within some parts of the rationalist community that is considered a problem (I model you as being one such person, Duncan).

I feel hopeful that we can settle whether or not this is a problem (or at least achieve much more mutual understanding and clarity).

So it is true that I can't provide adequate episteme of my claim, and maybe that's what you're reacting to.

This feels like the more important part ("if you don't have episteme, why do you believe it?") but I think there's a nearly-as-important other half, which is something like "presenting as having respected gnosis" vs. "presenting as having unrespected gnosis." If you're like "as a doctor, it is my considered medical opinion that everyone has spirituality", that's very different from "look, I can't justify this and so you should take it with a grain of salt, but I think everyone secretly has spirituality". I don't think you're at the first extreme, but I think Duncan is reacting to signals along that dimension.

comment by Vladimir_Nesov · 2019-09-12T14:33:59.451Z · score: 6 (3 votes) · LW · GW

there is absolutely a time and a place for this

That's not the point! Zack is talking about beliefs, not their declaration, so it's (hopefully) not the case that there is "a time and a place" for certain beliefs (even when they are not announced), or that beliefs require ability and willingness to justify them (at least for some senses of "justify" and "belief").

comment by Duncan_Sabien · 2019-09-12T00:42:13.684Z · score: 4 (5 votes) · LW · GW

Oh, one last footnote: at no point did I consider the other conversation private, at no point did I request that it be kept private, and at no point did Gordon ask if he could reference it (to which I would have said "of course you can"). i.e. it's not out of respect for my preferences that that information is not being brought in this thread.

comment by G Gordon Worley III (gworley) · 2019-09-12T03:59:38.816Z · score: 2 (3 votes) · LW · GW

Correct, it was made in a nonpublic but not private conversation, so you are not the only agent to consider, though admittedly the primary one other than myself in this context. I'm not opposed to discussing disclosure, but I'm also happy to let the matter drop at this point since I feel I have adequately pushed back against the behavior I did not want to implicitly endorse via silence since that was my primary purpose in continuing these threads past the initial reply to your comment.

comment by Duncan_Sabien · 2019-09-11T07:12:50.208Z · score: 3 (3 votes) · LW · GW

There's a world of difference between someone saying "[I think it would be better if you] cut it out because I said so" and someone saying "[I think it would be better if you] cut it out because what you're doing is bad for reasons X, Y, and Z." I didn't bother to spell out that context because it was plainly evident in the posts prior. Clearly I don't have any authority beyond the ability to speak; to

claim or argue that I am dangerous in some way

IS what I was doing, and all I was doing.

comment by G Gordon Worley III (gworley) · 2019-09-11T15:24:16.300Z · score: 4 (3 votes) · LW · GW

I mostly disagree that better reasons matter in a relevant way here, especially since I am currently reading your intent not as one of informing me that you think there is a norm that should be enforced but instead as a bid to enforce that norm. To me what's relevant is intended effect.

comment by elityre · 2019-09-12T09:10:13.477Z · score: 17 (4 votes) · LW · GW

What's the difference?

Suppose I'm talking with a group of loose acquaintances, and one of them says (in full seriousness), "I'm not homophobic. It's not that I'm afraid of gays, I just think that they shouldn't exist."

It seems to me that it is appropriate for me to say, "Hey man, that's not ok to say." It might be that a number of other people in the conversation would back me up (or it might be that they'd defend the first guy), but there wasn't common knowledge of that fact beforehand.

In some sense, this is a bid to establish a new norm, by pushing the private opinions of a number of people into common knowledge. It also seems to me to be a virtuous thing to do in many situations.

(Noting that my response to the guy is not: "Hey, you can't do that, because I get to decide what people do around here." It's "You can't do that, because it's bad" and depending on the group to respond to that claim in one way or another.)




comment by Duncan_Sabien · 2019-09-11T15:28:38.229Z · score: 7 (1 votes) · LW · GW

"Here are some things you're welcome to do, except if you do them I will label them as something else and disagree with them."

Your claim that you had tentative conclusions that you were willing to update away from is starting to seem like lip service.

I am currently reading your intent not as one of informing me that you think there is a norm that should be enforced

Literally my first response to you centers around the phrase "I think it's a good and common standard to be skeptical of (and even hostile toward) such claims." That's me saying "I think there's a norm here that it's good to follow," along with detail and nuance à la here's when it's good not to follow it.


comment by G Gordon Worley III (gworley) · 2019-09-11T16:22:34.594Z · score: 4 (3 votes) · LW · GW

This is a question of inferred intent, not what you literally said. I am generally hesitant to take much moderation action based on what I infer, but you have given me additional reason to believe my interpretation is correct in a nonpublic thread on Facebook.

(If admins feel this means I should use a Reign of Terror moderation policy, I can switch to that.)

Regardless, I consider this a warning of my local moderation policy only and don't plan to take action on this particular thread.

comment by Ben Pace (Benito) · 2019-09-11T18:48:17.859Z · score: 4 (2 votes) · LW · GW

Er, I generally have FB blocked, but I have now just seen the thread on FB that Duncan made about you, and that does change how I read the dialogue (it makes Duncan’s comments feel more like they’re motivated by social coordination around you rather than around meditation/spirituality, which I’d previously assumed).

(Just as an aside, I think it would’ve been clearer to me if you’d said “I feel like you’re trying to attack me personally for some reason and so it feels especially difficult to engage in good faith with this particular public accusation of norm-violation” or something like that.)

I may make some small edits to my last comment up-thread after taking this into account, though I am still curious about your answer to the question as I initially stated it.

comment by Duncan_Sabien · 2019-09-11T21:07:21.585Z · score: 3 (3 votes) · LW · GW

I can have different agendas and follow different norms on different platforms. Just saying. If I were trying to do the exact same thing in this thread as I am in the FB thread, they would have the same words, instead of different words.

(The original objection *does* contain the same words, but Gordon took the conversation in meaningfully different directions on the two different platforms.)

I note that above, Gordon is engaging in *exactly* the same behavior that I was trying to shine a spotlight on (claiming to understand my intent better than I do myself/holding to his model that I intend X despite my direct claims to the contrary).

comment by jimrandomh · 2019-09-12T00:20:15.389Z · score: 11 (5 votes) · LW · GW

Outside observer takeaway: There's a bunch of sniping and fighting here, but if I ignore all the fighting and look at only the ideas, what we have is that Gordon presented an idea, Duncan presented counterarguments, and Gordon declined to address the counterarguments. Posting on shortform doesn't come with an obligation to follow up and defend things; it's meant to be a place where tentative and early stage ideas can be thrown around, so that part is fine. But I did come away believing the originally presented idea is probably wrong.

(Some of the meta-level fighting seemed not-fine, but that's for another comment.)

comment by Viliam · 2019-09-08T21:24:28.566Z · score: 5 (3 votes) · LW · GW

Seems to me that modern life is full of distractions. As a smart person, you probably have work that requires thinking (not just moving your muscles in a repetitive way). In your free time there is the internet, with all its websites optimized for addictiveness. Plus all the other things you want to do (books to read, movies to see, friends to visit). Electricity can turn your late night into day; you can take a book or a smartphone everywhere.

So, unless we choose it consciously, there are no silent moments to get in contact with yourself... or with whatever higher power you imagine to be talking to you.

I wonder what the ratio of effect is between meditation and simply taking a break and wondering about stuff. Maybe it's our productivity-focused thinking saying that meditating (doing some hard work in order to gain supernatural powers) is a worthy endeavor, while goofing off is a sin.

comment by G Gordon Worley III (gworley) · 2019-09-09T20:41:49.283Z · score: 3 (2 votes) · LW · GW

"Simply taking a break and wondering about stuff" is a decent way to get in touch with this thing I'm pointing at. The main downside to it is that it's slow, in that for it to produce effects similar to meditation probably requires an order of magnitude more time, and likely won't result in the calmest brain states where you can study your phenomenology clearly.

comment by Xenotech · 2019-09-08T02:07:56.443Z · score: 1 (1 votes) · LW · GW

Are there individuals willing to explicitly engage in comforting discussion regarding these things you've written about? Any willing to extend personal invitations?

I would love to discuss spirituality with otherwise "rational" intelligent people.

Please consider reaching out to me personally - it would be transformative: drawnalong@gmail.com

comment by G Gordon Worley III (gworley) · 2019-09-08T23:00:34.693Z · score: 6 (3 votes) · LW · GW

If CAIS is sufficient for AGI, then likely humans are CAIS-style general intelligences.

comment by mr-hire · 2019-09-09T15:44:55.061Z · score: 10 (3 votes) · LW · GW

What's the justification for this? Seems pretty symmetric to "If wheels are sufficient for getting around, then it's likely humans evolved to use wheels."

comment by G Gordon Worley III (gworley) · 2019-09-09T20:38:10.957Z · score: 2 (1 votes) · LW · GW

Human brains look like they are made up of many parts with various levels and means of integration. So if it turns out that we could build something like AGI via CAIS, that is, that CAIS can be assembled in a way that results in general intelligence, then I think it's likely that human intelligence doesn't have anything special going on that would meaningfully differentiate it from the general notion of CAIS, other than being implemented in meat.

comment by G Gordon Worley III (gworley) · 2019-08-30T18:41:03.225Z · score: 6 (4 votes) · LW · GW

Strong and Weak Ontology

Ontology is how we make sense of the world. We make judgements about our observations and slice up the world into buckets we can drop our observations into.

However I've been thinking lately that the way we normally model ontology is insufficient. We tend to talk as if ontology is all one thing, one map of the territory. Maybe that map can be very complex, a multi-manifold map that permits shifting perspectives, but it is one map all the same.

We see some hints that this model of ontology as a single map breaks down when we notice that some people, myself included, can hold multiple, contradictory ontologies and switch between them. And with further development there's no switching: it all just is, only complex and with multiple projections that overlap.

But there's more. What we've been talking about here has mostly been a "strong" form of ontology that seeks to say something about the being of the world, to reify it into type-objects that can be considered, but there's also a "weak" kind of ontology from which the strong kind arises and which can exist without it. It's the ontology that I referenced at the start of the post, the ontology of discrimination and nothing else. Much of ontology is taking that discrimination and turning it into a full-fledged model or map, but there's a weak notion of ontology that exists even if all we do is draw lines where we see borders.
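To make the distinction concrete, here's a minimal toy sketch in code (entirely my own illustration; the domain, names, and threshold are invented for the example). The "weak" part just draws a boundary; the "strong" part reifies each side of the boundary into a named type with further claims attached.

```python
# Toy illustration of "weak" vs. "strong" ontology; the example domain,
# names, and threshold are arbitrary assumptions, not anything canonical.
from dataclasses import dataclass

# "Weak" ontology: pure discrimination -- draw a line and say nothing else.
def discriminate(x: float) -> bool:
    return x > 0.5

# "Strong" ontology: reify each side of the line into a type-object that
# carries further claims about what the thing *is*.
@dataclass
class Category:
    name: str
    description: str

HOT = Category("hot", "things above the boundary, assumed to burn you")
COLD = Category("cold", "things below the boundary, assumed to be safe")

def classify(x: float) -> Category:
    return HOT if discriminate(x) else COLD

print(discriminate(0.7))          # weak: just "this side of the line"
print(classify(0.7).description)  # strong: a full claim about the thing
```

The only point of the sketch is that the second layer can be stripped away while the bare discrimination remains.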

I can't recall seeing much on this in Western philosophy; I thought about it after combining my reading of Western philosophy with my reading of Buddhist philosophy and what it has to say about how mental activity arises. But Buddhist philosophy doesn't have a strong notion of ontology the way Western philosophy does, so maybe it's not surprising this subtle point has been missed.

comment by G Gordon Worley III (gworley) · 2019-08-07T01:54:55.393Z · score: 6 (4 votes) · LW · GW

So long as shortform is salient for me, might as well do another one on a novel (in that I've not heard/seen anyone express it before) idea I have about perceptual control theory, minimization of prediction error/confusion, free energy, and Buddhism that I was recently reminded of.

There is a notion within Mahayana Buddhism of the three poisons: ignorance, attachment (or, I think we could better term this here, attraction, for reasons that will become clear), and aversion. This is part of one model of where suffering arises from. Others express these notions in other ways, but I want to focus on this way of talking about these root kleshas (defilements, afflictions, mind poisons) because I think it has a clear tie in with this other thing that excites me, the idea that the primary thing that neurons seek to do is minimize prediction error.

Ignorance, even among the three poisons, is generally considered more fundamental, in that ignorance appears first and it gives rise to attraction and aversion (in some models there is fundamental ignorance that gives rise to the three poisons, marking a separation between ignorance as mental activity and ignorance as a result of the physical embodiment of information transfer). This looks to me a lot like what perceptual control theory predicts if the thing being controlled for is minimization of prediction error: there is confusion about the state of the world, information comes in, and this sends a signal within the control system of neurons to either up- or down-regulate something. Essentially, what the three poisons describe is what you would expect the world to look like if the mind were powered by control systems trying to minimize confusion/ignorance, nudging the system toward and away from a set point where prediction error is minimized via negative feedback (and, as a small bonus, this might help explain why the brain doesn't tend to get into long-lasting positive feedback loops: it's not constructed for it, and before long you trigger something else to down-regulate because you violate its predictions).
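To make the control-loop picture concrete, here's a minimal toy sketch (my own illustration, not drawn from the PCT or predictive-processing literature; the gain value and names are arbitrary assumptions): a unit holds a prediction, receives observations, and corrects toward the observation in proportion to the error, so the correction always shrinks the mismatch.

```python
# Toy sketch of a control unit that minimizes prediction error via
# negative feedback. Names and numbers are illustrative assumptions only.

class PredictionErrorController:
    def __init__(self, prediction: float = 0.0, gain: float = 0.1):
        self.prediction = prediction  # current "belief" / set point
        self.gain = gain              # how strongly to correct per step

    def step(self, observation: float) -> float:
        # "Ignorance": the mismatch between prediction and what arrived.
        error = observation - self.prediction
        # "Attraction"/"aversion": nudge the prediction up or down toward
        # the observation. Negative feedback: the correction shrinks the
        # error rather than amplifying it, so the loop doesn't run away.
        self.prediction += self.gain * error
        return error

controller = PredictionErrorController()
for obs in [1.0, 1.0, 1.0, 0.0]:
    err = controller.step(obs)
    print(f"obs={obs:.1f} error={err:+.2f} prediction={controller.prediction:.2f}")
```

The only point of the sketch is the sign structure: error drives a correction toward the observation, and repeated corrections shrink the error, which is the loose analogue of the claim above.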

It also makes a lot of sense that these would be the root poisons. I think we can forgive first-millennium Buddhists for not discovering PCT or minimization of prediction error directly, but we should not be surprised that they identified the mental actions this theory predicts should be foundational to the mind and also recognized that they were foundational to all other mental actions. Elsewhere, Buddhism explicitly calls out ignorance as the fundamental force driving dukkha (suffering), though we probably shouldn't assign too many points to (non-Madhyamaka) Buddhism for noticing this, since other Buddhist theories don't make these same claims about attachment and aversion and yet are used concurrently in explication of the dharma.

comment by G Gordon Worley III (gworley) · 2019-11-20T02:11:10.836Z · score: 2 (1 votes) · LW · GW

Story stats are my favorite feature of Medium. Let me tell you why.

I write primarily to impact others. Although I sometimes choose to do very little work to make myself understandable to anyone more than a few inferential steps behind me, and instead write out on a far frontier of thought, my purpose nonetheless remains sharing my ideas with others. If it weren't for that, I wouldn't bother to write much at all, and certainly not in the same way as I do when writing for others. Thus I care a lot, instrumentally, about being able to assess whether I am having the desired impact, so that I can improve in ways that might help serve my purposes.

LessWrong provides some good, high detail clues about impact: votes and comments. Comments on LW are great, and definitely better in quality and depth of engagement than what I find other places. Votes are also relatively useful here, caveat the weaknesses of LW voting I've talked about before. If I post something on LW and it gets lots of votes (up or down) or lots of comments, relative to what other posts receive, then I'm confident people have read what I wrote and I impacted them in some way, whether or not it was in the way I had hoped.

That's basically where story stats stop on LessWrong. Here's a screen shot of the info I get from Medium:

For each story you can see a few things here: views, reads, read ratio, and fans, which are basically likes. I also get an email every week telling me about the largest updates to my story stats, like how many additional views, reads, and fans a story had in the last week.

If I click the little "Details" link under a story name I get more stats: average read time, referral sources, internal vs. external views (external views are views on RSS, etc.), and even a list of "interests" associated with readers who read my story.

All of this is great. Each week I get a little positive reward letting me know what I did that worked, what didn't, and most importantly to me, how much people are engaging with things I wrote.

I get some of that here on LessWrong, but not all of it. Although I've bootstrapped myself now to a point where I'll keep writing even absent these motivational cues, I still find this info useful for understanding which things I wrote people liked best or found most useful and which they found least useful. Some of that is mirrored here by things like votes, but it doesn't capture all of it.

I think it would be pretty cool if I could see more stats about my posts on LessWrong similar to what I get on Medium, especially view and read counts (knowing that "reads" is ultimately a guess, based on some users allowing JavaScript that lets the site infer they read it).
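For what it's worth, here's a rough sketch of what such bookkeeping might amount to; the event format and the "counts as a read" heuristic are invented for illustration and aren't how Medium or LessWrong actually compute these numbers.

```python
# Rough sketch of view/read/read-ratio bookkeeping. The event shape and the
# read heuristic are assumptions for illustration, not any site's real code.

def story_stats(events):
    """events: dicts like {"seconds_on_page": float, "scrolled_to_end": bool,
    "liked": bool}, as might be reported by a JS beacon when users allow it."""
    views = len(events)
    # A "read" is necessarily a guess; here: enough time on page plus
    # scrolling to the end.
    reads = sum(1 for e in events
                if e["seconds_on_page"] >= 30 and e["scrolled_to_end"])
    fans = sum(1 for e in events if e.get("liked", False))
    return {
        "views": views,
        "reads": reads,
        "read_ratio": reads / views if views else 0.0,
        "fans": fans,
    }

print(story_stats([
    {"seconds_on_page": 120, "scrolled_to_end": True, "liked": True},
    {"seconds_on_page": 5, "scrolled_to_end": False},
]))
```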