G Gordon Worley III's Shortform
post by Gordon Seidoh Worley (gworley) · 2019-08-06T20:10:27.796Z · LW · GW · 149 comments
Comments sorted by top scores.
comment by Gordon Seidoh Worley (gworley) · 2019-08-21T12:54:12.964Z · LW(p) · GW(p)
Some thoughts on Buddhist epistemology.
This risks being threatening, upsetting, and heretical to a certain point of view I commonly see expressed on LW, for reasons that will become clear if you keep reading. I don't know whether that means you shouldn't read this if it sounds like the kind of thing you don't want to read, but I put the warning out there so you can make the choice without having to engage with the specifics. I don't think you will be missing out on anything if that warning gives you a tinge of "maybe I won't like reading this".
My mind produces a type error when people try to perform deep and precise epistemic analysis of the dharma. That is, evaluating the truth of particular claims the dharma makes seems generally fine, but when the analysis goes deep enough to ask whether the dharma itself is based on something true, I get the type error.
I'm not sure what people who try to do this turn up. My expectation is that their results look like noise if you aggregate over all such attempts, the reason being that the dharma is not founded on episteme.
As a quick reminder, there are at least three categories of knowledge worth considering: doxa, episteme, and gnosis. Doxa might translate as "hearsay" in English; it's secondhand statements about the truth. Episteme is knowledge you come to believe via evaluation of the truth. Gnosis is direct knowledge of reality, unmediated by ontology. To these I'll also add techne, distinguished from episteme in that the former is experienced knowledge and the latter is reasoned knowledge.
I'll make the probably not very bold claim that most LW rationalists value episteme above all else, accept techne as evidence, accept doxa as evidence about evidence but only weak evidence of truth itself, and mostly ignore gnosis because it is not "rational" in the sense that it cannot be put into words, only pointed at by them, and so cannot be analyzed: there is no ontology or categorization with which to make claims about it one way or the other.
Buddhist philosophy values gnosis above all else, then techne, then doxa, then episteme.
To say a little more, the most important thing in Buddhist thinking is seeing reality just as it is, unmediated by the "thinking" mind, by which we really mean the acts of discrimination, judgement, categorization, and ontology. To be sure, this "reality" is not external reality, which we never get to see directly, but rather our unmediated contact with it via the senses. But for all the value of gnosis, unless you plan to sit on a lotus flower in perfect equanimity forever and never act in the world, it's not enough. Techne is the knowledge we gain through action in the world, and although it does pass judgement and discriminate, it also stays close to the ground and makes few claims. It is deeply embodied in action itself.
I'd say doxa comes next because there is a tradition of passing on the words of enlightened people as they said them and acting, at least some of the time, as if they were 100% true. Don't confuse this with just letting anything in, though. The point is to trust the words of those who came before and saw more than you, because doing so is often very helpful for learning to see for yourself what was previously invisible. But the seeing is always something you do yourself; it is not contingent on the teachings, which only pointed you toward where to look and always failed (because it was impossible) to put into words what you would find. The old story is that the Buddha, when asked why he should be believed, said: don't; try it for yourself and see what you find.
Episteme is last, and that's because it's not to be trusted. Of all the ways of knowing, episteme is the least grounded in reality. This should not be surprising, but it might be, so I'll say a bit about it. Formal methods are not grounded. There's a reason the grounding problem, epistemic circularity, the problem of the criterion, the problem of finding the universal prior, etc. remain fundamentally unsolved: they are unsolvable in a complete and adequate way. Instead we get pragmatic solutions that cross the chasm between reality and belief, between noumena and phenomena, between the ontic and ontology, and this leap of faith means episteme is always contingent on that leap. Even when episteme proves things, we must be careful: because it's not grounded, we have to check everything it produces by other means, going all the way down to gnosis if possible, and to techne at the least.
None of this is to say that episteme is not useful for many things, including making predictions, but we hold it at arm's length because of its powerful ability to confuse us if we didn't happen to make the right leaps where we pragmatically had to. It also always leaves something out, because it requires distinctions to function, so it is always less complete. At the same time, it often makes predictions that turn out to be true, and the world is better for our powerful application of it. We just have to keep in mind what it is, what it can do, and what its dangers are, and engage with it in a thoughtful, careful way to avoid getting lost and confusing our perception of reality for reality itself.
So when we talk about the dharma or justify our actions on it, it's worth noting that it is not really trying to provide consistent episteme. It's grounded on gnosis and techne, presented via doxa, and only after the fact might we try to extend it via episteme to get an idea of where to look to understand it better. Thus it's a strange inversion to ask the dharma for episteme-based proofs. It can't give them, nor does it try, because its episteme is not consistent and cannot be because it chooses completeness instead.
So where does this leave us? If you want to evaluate the dharma, you'll have to do it yourself. You can't argue about it or reason your way to it; you have to sit down and look at the nature of reality without conceptualizing it. Maybe that means you won't engage with it, since it doesn't choose to accept the framing of episteme. That seems fine if you are so inclined. But then don't be surprised if the dharma claims you are closed-minded, if you feel like it attacks your identity, and if it feels just true enough that you can't easily dismiss it out of hand although you might like to.
Replies from: Kaj_Sotala, romeostevensit, Viliam, Chris_Leong, Ouroborus, hamnox
↑ comment by Kaj_Sotala · 2019-08-21T18:38:08.438Z · LW(p) · GW(p)
So when we talk about the dharma or justify our actions on it, it's worth noting that it is not really trying to provide consistent episteme. [...] Thus it's a strange inversion to ask the dharma for episteme-based proofs. It can't give them, nor does it try, because its episteme is not consistent and cannot be because it chooses completeness instead.
In my view, this seems like a clear failing. The fact that the dharma comes from a tradition where this has usually been the case is not an excuse for not trying to fix it.
Yes, the method requires temporarily suspending episteme-based reasoning and engaging with less conceptual forms of seeing. But it can still be justified and explained using episteme-based models; if it could not, there would be little reason to expect that it would be worth engaging with.
This is not just a question of "the dharma has to be able to justify itself"; it's also that leaving out the episteme component leaves the system impoverished, as noted e.g. here:
Recurrent training to attend to the sensate experience moment-by-moment can undermine the capacity to make meaning of experience. (The psychoanalyst Wilfred Bion described this as an ‘attack on linking’, that is, on the meaning-making function of the mind.) When I ask these patients how they are feeling, or what they are thinking, or what’s on their mind, they tend to answer in terms of their sensate experience, which makes it difficult for them to engage in a transformative process of psychological self-understanding.
and here:
In important ways, it is not possible to encounter our unconscious – at least in the sense implied by this perspective – through moment-to-moment awareness of our sensate experience. Yes, in meditation we can have the experience of our thoughts bubbling just beneath the surface – what Shinzen Young calls the brain’s pre-processing – but this is not the unconscious that I’m referring to, or at least not all of it.
Let me give an example. Suppose that I have just learned that a close friend has died. I’m deeply saddened by this news. Moments later, I spill a cup of coffee on my new pants and become quite angry. Let’s further suppose that, throughout my life, I’ve had difficulty feeling sadness. For reasons related to my personal history, sadness frightens me. In my moment of anger, if I adopt the perspective of awareness of sensate experience, moment-by-moment, then I will have no access to the fact that I am sad. On the contrary, my sensate experience seems to reflect the fact that I am angry. But given what I know about myself, it’s quite reasonable to posit that my anger is a defense against the feeling of sadness, a feeling of which I am unconscious as I am caught up in my anger.
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2019-08-21T19:31:50.784Z · LW(p) · GW(p)
Hmm, I feel like there are multiple things going on here, but I think it hinges on this:
Yes, the method requires temporarily suspending episteme-based reasoning and engaging with less conceptual forms of seeing. But it can still be justified and explained using episteme-based models; if it could not, there would be little reason to expect that it would be worth engaging with.
Different traditions vary in how much they emphasize models and episteme. None of them completely ignores it; they only seek to keep it in its proper place. It's not that episteme is useless, only that it is not primary. You should of course include it, because it's part of the world, and to deny it would lead to confusion and suffering. As you note with your first example especially, some people learn to turn off the discriminating mind rather than hold it as object, and they are worse for it because then they can't engage with it anymore. Turning it off would only be safe if you really had become so enlightened that you had no shadow and would never accumulate any additional shadow, and even then it seems strange from where I stand to do that, although maybe it would make sense to me if I were in a position where it were a reasonable and safe option.
So to me this reads like an objection to a position I didn't mean to take. I mean to say that episteme has a place and is useful, but it is not taken as primary to understanding. At some points Buddhist episteme will say contradictory things; that's fine and expected, because dharma episteme is normally post hoc rather than ante hoc (though it is still expected to be rational right up until it is forced to hit a contradiction), and ante hoc is okay so long as it is later verified via gnosis or techne.
↑ comment by romeostevensit · 2019-09-11T15:42:32.267Z · LW(p) · GW(p)
>unmediated-by-ontology knowledge of reality.
I think this is a confused concept, related to wrong-way-reduction.
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2019-09-21T22:10:28.798Z · LW(p) · GW(p)
I've thought about this a bit and I don't see my way through to the thinking that makes you suggest this, since I don't see a reduction happening here, much less one that bundles together confusion so that it only looks simpler. Can you say a bit more that might make your perspective on this clearer?
Replies from: romeostevensit
↑ comment by romeostevensit · 2019-09-25T00:34:58.925Z · LW(p) · GW(p)
In particular, I think under this formulation knowledge and ontology largely refer to the same thing, which is part of the reason I think this formulation is mistaken. Separately, I think 'reality' has too many moving parts to be useful for the role it's being used for here.
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2019-09-26T17:21:14.960Z · LW(p) · GW(p)
Maybe. There is a distinction I'm trying to draw between knowledge and ontological knowledge, though it's admittedly not very clear and maybe it's not coming across. If it is coming across and you have some particular argument for why there isn't or can't be such a meaningful distinction, I'd be interested to hear it.
As for my model of reality having too many moving parts, you're right, I'm not totally unconfused about everything yet, and it's the place the remaining confusion lives.
↑ comment by Viliam · 2019-08-21T22:57:21.901Z · LW(p) · GW(p)
the most important thing in Buddhist thinking is seeing reality just as it is, unmediated by the "thinking" mind, by which we really mean the acts of discrimination, judgement, categorization, and ontology. To be sure, this "reality" is not external reality, which we never get to see directly, but rather our unmediated contact with it via the senses.
The "unmediated contact via the senses" can only give you sensual inputs. Everything else contains interpretation. That means, you can only have "gnosis" about things like [red], [warm], etc. Including a lot of interesting stuff about your inner state, of course, but still fundamentally of the type [feeling this], [thinking that], and perhaps some usually-unknown-to-non-Buddhists [X-ing Y], etc.
Poetically speaking, these are the "atoms of experience". (Some people would probably say "qualia".) But some interpretation needs to come in to build molecules out of these atoms. Without interpretation, you could barely distinguish between a cat and a warm pillow... which IMHO is a bit insufficient for a supposedly supreme knowledge.
Replies from: romeostevensit
↑ comment by romeostevensit · 2020-03-07T22:40:03.177Z · LW(p) · GW(p)
It's even worse than that, 'raw' sensory inputs already have ontological commitments. Those priors inform all our interpretations pre-consciously. Agree that the efficiency of various representations in the context of coherent intents is a good lens.
↑ comment by Chris_Leong · 2019-08-22T00:05:02.729Z · LW(p) · GW(p)
I agree with Kaj Sotala and Viliam that episteme is underweighted in Buddhism, but thanks for explicating that worldview.
↑ comment by Ouroborus · 2020-03-07T18:37:11.863Z · LW(p) · GW(p)
Could you clarify the distinction between techne and gnosis? Is it something like playing around with a hammer and seeing how it works?
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2020-03-09T17:20:35.797Z · LW(p) · GW(p)
It's not a very firm distinction, but techne is knowledge from doing, so I would consider playing with a hammer a way to develop techne. It certainly overlaps with gnosis, which is a bit more general and includes knowledge from direct experience that doesn't involve "doing", like the kind of knowledge you gain from observing. But the act of observing is itself a kind of thing you do, so as you can see it's fuzzy; generally, though, I think of techne as that which involves your body moving.
↑ comment by hamnox · 2019-09-25T02:31:11.260Z · LW(p) · GW(p)
I am glad for having read this, but can't formulate my thoughts super clearly. Just have this vague sense that you're using too many groundless words and not connecting to the few threads of gnosis(?) that other rationalists would have available.
comment by Gordon Seidoh Worley (gworley) · 2019-09-09T00:49:00.360Z · LW(p) · GW(p)
If an organism is a thing that organizes, then a thing that optimizes is an optimism.
comment by Gordon Seidoh Worley (gworley) · 2021-02-02T17:22:15.631Z · LW(p) · GW(p)
I'm sad that postrationality/metarationality has, as a movement, started to collapse on itself in terms of doing the thing it started out doing.
What I have in mind is that initially, say 5+ years ago, postrationality was something of a banner for folks who were already in the rationalist or rationalist-adjacent community, saw some ways in which rationalists were failing at their own project, and tried to work on figuring out how to do those things.
Now, much like postmodernism before it, I see postrationality collapsing from a thing for people who were already rationalists and wanted to go beyond the limitations of the rationalism of the time, into a kind of prerationality that rejects rather than builds on the rationalist project.
This kind of dynamic is pretty common (cf. premodern, modern, and postmodern) but it still sucks. On the other hand, I guess the good side of it is that I see lots of signs that the rationality community is better integrating some of the early postrationalist insights such that it feels like there's less to push back against in the median rationalist viewpoint.
Replies from: abramdemski, Viliam
↑ comment by abramdemski · 2021-02-02T17:49:10.120Z · LW(p) · GW(p)
Yeah, it seems like postrationalists should somehow establish their rationalist pedigree before claiming the post- title. IIRC, Chapman endorsed this somewhere on twitter? But I can't find it now. Maybe it was a different postrat. Also it was years ago.
↑ comment by Viliam · 2021-02-05T18:45:52.832Z · LW(p) · GW(p)
Are there any specific articles you could point out as good examples of this? I don't remember reading anything about "postrationality" for a year or so -- I actually kinda forgot they exist -- so I am curious what I missed.
I had a weird feeling from the beginning, when it seemed that Chapman -- a leader of a local religious group, if I understand it correctly -- became the key figure of "doing rationality better". On the other hand, it's not like Less Wrong avoided religious woo completely. Seems like somehow it only became a minor topic here, and maybe a more central one among the postrationalists? (Perhaps because other competing topics, such as AI, were missing?)
Also, I suppose that defining yourself in opposition to something is not helpful to actually finding the "middle way". Which is why it was easier for rationalists to accept the good arguments made by postrationalists than the other way round.
comment by Gordon Seidoh Worley (gworley) · 2020-04-12T19:42:23.079Z · LW(p) · GW(p)
This is a short post to register my kudos to LWers for being consistently pretty good at helping each other find answers to questions, or at least make progress towards answers. I feel like I've used LW numerous times to make progress on work by saying "here's what I got, here's where I'm confused, what do you think?", whether that be through formal question posts or regular posts that are open ended. Some personal examples that come to mind: recent [LW · GW], older [LW · GW], another [LW · GW].
Praise to the LW community!
comment by Gordon Seidoh Worley (gworley) · 2022-05-13T18:22:06.485Z · LW(p) · GW(p)
I'm fairly pessimistic about our ability to build aligned AI. My take is roughly that it's theoretically impossible, and that at best we might build AI that is aligned well enough that we don't lose. I've not written anything that really summarizes or proves this, though.
The source of my take comes from two facts:
(1) Goodharting is robust. That is, the mechanism of Goodharting seems impossible to overcome. Goodharting is just a fact of any control system.
(2) It's impossible to infer the inner experience (and thus values) of another being perfectly without making normative assumptions.
Stuart Armstrong has made a case for (2) with his no free lunch theorem. I've not seen anyone formally make the case for (1), though.
Is this something worth trying to prove? That Goodharting is unavoidable and at most we can try to contain its effects?
I'm many years out from doing math full time, so I'm not sure I could produce a rigorous proof of it, but this seems to be something people sometimes disagree about (arguing that Goodharting can be overcome), and I think most of those discussions don't get very precise about what that would mean.
Replies from: samuel-marks, rhollerith_dot_com, rhollerith_dot_com
↑ comment by Sam Marks (samuel-marks) · 2022-05-13T23:38:36.940Z · LW(p) · GW(p)
This paper gives a mathematical model of when Goodharting will occur. To summarize: if
(1) a human has some collection of things which she values,
(2) a robot has access to a proxy utility function which takes into account some strict subset of those things, and
(3) the robot can freely vary how much of each of those things there is in the world, subject only to resource constraints that make them trade off against each other,
then when the robot optimizes for its proxy utility, it will minimize all of the valued things which its proxy utility function doesn't take into account. If you impose a further condition which ensures that you can't get too much utility by maximizing only a strict subset of the valued things (e.g. assuming diminishing marginal returns), then the optimum found by the robot will be suboptimal for the human's true utility function.
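Here is a toy numerical sketch of the setup just described (my own construction for illustration, not code from the paper; the attribute count, square-root utilities, budget, and greedy allocation are all assumptions):

```python
# Toy model: the human values n attributes with diminishing returns, the
# robot's proxy only counts the first k of them, and a fixed budget must be
# split among all of them. The robot allocates the budget greedily to
# whatever most improves its proxy.

import math

n, k = 4, 2                      # 4 valued attributes, proxy sees only 2
budget, step = 10.0, 0.01

def true_utility(J):
    return sum(math.sqrt(x) for x in J)          # diminishing marginal returns

def proxy_utility(J):
    return sum(math.sqrt(x) for x in J[:k])      # strict subset of attributes

J = [0.0] * n
spent = 0.0
while spent < budget:
    # marginal proxy gain from putting the next unit of resources into attribute i
    gains = [proxy_utility(J[:i] + [J[i] + step] + J[i+1:]) - proxy_utility(J)
             for i in range(n)]
    best = max(range(n), key=lambda i: gains[i])
    J[best] += step
    spent += step

print("allocation:", [round(x, 2) for x in J])    # unseen attributes stay at 0
print("true utility at proxy optimum:", round(true_utility(J), 2))
print("true utility from an even split:", round(true_utility([budget / n] * n), 2))
```

Because the proxy ignores two of the four attributes, the greedy optimizer pours the entire budget into the attributes it can see and leaves the others at zero, ending up worse by the true utility than even a naive even split.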
That said, I wasn't super-impressed by this paper -- the above is pretty obvious and the mathematical model doesn't elucidate anything, IMO.
Moreover, I think this model doesn't interact much with the skeptical take about whether Goodhart's Law implies doom in practice. Namely, here are some things I believe about the world which this model doesn't take into account:
(1) Lots of the things we value are correlated with each other over "realistically attainable" distributions of world states. Or in other words, for many pairs of things we care about, it is hard (concretely, requires a very capable AI) to increase the amount of one without also increasing the amount of the other.
(2) The utility functions of future AIs will be learned from humans in such a way that as the capabilities of AI systems increase, so will their ability to model human preferences.
If (1) is true, then for each given capabilities level, there is some room for error for our proxy utility functions (within which an agent at that capabilities level won't be able to decouple our proxy utility function from our true utility function); this permissible error margin shrinks with increasing capabilities. If you buy (2), then you might additionally think that the actual error margin between learned proxy utility functions and our true utility function will shrink more rapidly than the permissible error margin as AI capabilities grow. (Whether or not you actually do believe that value learning will beat capabilities in this race probably depends on a whole lot of other empirical beliefs, or so it seems to me.)
This thread [LW(p) · GW(p)] (which you might have already seen) has some good discussion about whether Goodharting will be a big problem in practice.
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2022-05-15T17:29:46.181Z · LW(p) · GW(p)
I actually don't think that model is general enough. Like, I think Goodharting is just a fact of any control system that works through observation.
Suppose we have a simple control system with output O and a governor G. G takes a measurement M (an observation) of O. So long as M is not error free (and I think we can agree that no real world system can be actually error free), then M differs from O by some error factor E. Since G uses M to regulate the system to change O, we now have error influencing the value of O. Now applying the standard reasoning for Goodhart, in the limit of optimization pressure (i.e. regulating the value of O for long enough), E comes to dominate the value of O.
This is a bit handwavy, but I'm pretty sure it's true, which means that in theory any attempt to optimize for anything will, under enough optimization pressure, become dominated by error, whether we're optimizing for human values or something else. The only interesting question is whether we can control the error enough, either through better measurement or less optimization pressure, to get enough signal to be happy with the output.
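A quick toy simulation of this claim (my own construction, not Gordon's formalism: I model the measurement additively as M = O + E, the distributions are assumptions, and selection over many candidate options stands in for sustained regulation):

```python
# A governor picks whichever option looks best on a noisy measurement
# M = O + E. With enough options to select among (optimization pressure)
# and error that is heavier-tailed than the true output, the chosen
# option's measured score is mostly error rather than true output.

import numpy as np

rng = np.random.default_rng(0)

for n_options in (10, 1_000, 100_000):
    O = rng.normal(size=n_options)            # true output of each option
    E = rng.standard_cauchy(size=n_options)   # heavy-tailed measurement error
    M = O + E                                 # what the governor actually sees
    best = np.argmax(M)                       # the optimization step
    print(f"options={n_options:>7}  measured={M[best]:10.1f}  "
          f"true={O[best]:6.2f}  error={E[best]:10.1f}")
```

With error at least as heavy-tailed as the true output, the option that measures best is chosen almost entirely for its error, which is one concrete sense in which E comes to dominate under optimization pressure; with lighter-tailed error the effect is weaker, which is where the "can we control the error enough" question bites.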
Replies from: samuel-marks
↑ comment by Sam Marks (samuel-marks) · 2022-05-15T20:41:34.014Z · LW(p) · GW(p)
Hmm, I'm not sure I understand -- it doesn't seem to me like noisy observations ought to pose a big problem to control systems in general.
For example, suppose we want to minimize the number of mosquitos in the U.S., and we have access to noisy estimates of mosquito counts in each county. This may result in us allocating resources slightly inefficiently (e.g. overspending resources on counties that have fewer mosquitos than we think), but we'll still always be doing the approximately correct thing and mosquito counts will go down. In particular, I don't see a sense in which the error "comes to dominate" the thing we're optimizing.
One concern which does make sense to me (and I'm not sure if I'm steelmanning your point or just saying something completely different) is that under extreme optimization pressure, measurements might become decoupled from the thing they're supposed to measure. In the mosquito example, this would look like us bribing the surveyors to report artificially low mosquito counts instead of actually trying to affect real-world mosquito counts.
If this is your primary concern regarding Goodhart's Law, then I agree the model above doesn't obviously capture it. I guess it's more precisely a model of proxy misspecification.
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2022-05-15T23:51:02.353Z · LW(p) · GW(p)
"Error" here is all sources of error, not just error in the measurement equipment. So bribing surveyors is a kind of error in my model.
Replies from: rhollerith_dot_com
↑ comment by RHollerith (rhollerith_dot_com) · 2022-05-17T17:50:57.365Z · LW(p) · GW(p)
Can you explain where there is an error term in AlphaGo, or where an error term might appear in a hypothetical model similar to AlphaGo trained much longer with many more parameters and computational resources?
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2022-05-18T00:14:51.593Z · LW(p) · GW(p)
AlphaGo is fairly constrained in what it's designed to optimize for, but it still has the standard failure mode of "things we forgot to encode". So for example AlphaGo could suffer the error of instrumental power grabbing in order to be able to get better at winning Go, because we misspecified what we asked it to measure. This is a kind of failure introduced into the system by humans failing to make the measure evaluate what we actually intended: we cared about winning Go games while also minimizing side effects, but maybe when we constructed the measure we forgot about minimizing side effects.
↑ comment by RHollerith (rhollerith_dot_com) · 2022-05-13T23:55:06.994Z · LW(p) · GW(p)
At least one person here disagrees with you on Goodharting. (I do.)
You've written before on this site, if I recall correctly, that Eliezer's 2004 CEV proposal is unworkable because of Goodharting. I am granting myself the luxury of not bothering to look up your previous statement, because you can contradict me if my recollection is incorrect.
I believe that the CEV proposal is probably achievable by humans if those humans had enough time and enough resources (money, talent, protection from meddling) and that if it is not achievable, it is because of reasons other than Goodhart's law.
(Sadly, an unaligned superintelligence is much easier for humans living in 2022 to create than a CEV-aligned superintelligence is, so we are probably all going to die IMHO.)
Perhaps before discussing the CEV proposal we should discuss a simpler question, namely, whether you believe that Goodharting inevitably ruins the plans of any group setting out intentionally to create a superintelligent paperclip maximizer.
Another simple goal we might discuss is a superintelligence (SI) whose goal is to shove as much matter as possible into a black hole or an SI that "shuts itself off" within 3 months of its launching where "shuts itself off" means stops trying to survive or to affect reality in any way.
↑ comment by RHollerith (rhollerith_dot_com) · 2022-06-25T20:21:22.416Z · LW(p) · GW(p)
The reason Eliezer's 2004 "coherent extrapolated volition" (CEV) proposal is immune to Goodharting is probably that immunity to it was one of the main criteria for its creation. I.e., Eliezer came up with it through a process of looking for a design immune to Goodharting. It may very well be that all other published proposals for aligning super-intelligent AI are vulnerable to Goodharting.
Goodhart's law basically says that if we put too much optimization pressure on criterion X, then as a side effect, the optimization process drives criteria Y and Z, which we also care about, higher or lower than we consider reasonable. But that doesn't apply when criterion X is "everything we value" or "the reflective equilibrium of everything we value".
The problem, of course, is that although the CEV plan is probably within human capabilities to implement (and IMHO Scott Garrabrant's work is probably a step forward), unaligned AI is probably significantly easier to implement, so it will likely arrive first.
comment by Gordon Seidoh Worley (gworley) · 2020-06-20T15:20:56.918Z · LW(p) · GW(p)
People often talk of unconditional love, but they implicitly mean unconditional love for or towards someone or something, like a child, parent, or spouse. But this kind of love is by definition conditional because it is love conditioned on the target being identified as a particular thing within the lover's ontology.
True unconditional love is without condition, and it cannot be directed because to direct is to condition and choose. Unconditional love is love of all, of everything and all of reality even when not understood as a thing.
Such love is rare, so it seems worth pursuing the arduous cultivation of it.
Replies from: Dagon, Raemon
↑ comment by Dagon · 2020-06-20T17:58:16.445Z · LW(p) · GW(p)
"love" is poorly-defined enough that it always depends on context. Often, "unconditional love" _is_ expected to be conditional on identity, and really should be called "precommitment against abandonment" or "unconditional support". But neither of those signal the strength of the intent and safety conferred by the relationship very well.
I _really_ like your expansion into non-identity, though. Love for the real state of the universe, and the simultaneous desire to pick better futures and acceptance of whichever future actually obtains is a mindset I strive for.
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2020-06-20T19:29:31.083Z · LW(p) · GW(p)
Love for the real state of the universe, and the simultaneous desire to pick better futures and acceptance of whichever future actually obtains
This is the hidden half of what got me thinking about this: my growing being with the world as it is rather than as I understand it.
comment by Gordon Seidoh Worley (gworley) · 2019-09-07T23:24:45.818Z · LW(p) · GW(p)
I think it's safe to say that many LW readers don't feel like spirituality [LW · GW] is a big part of their life, yet many (probably most) people do experience a thing that goes by many names---the inner light, Buddha-nature, shunyata, God [LW · GW]---and falls under the heading of "spirituality". If you're not sure what I'm talking about, then I'm pointing to a common human experience you aren't having.
Only, I don't think you're not having it, you just don't realize you are having those experiences.
One way some people get in touch with this thing, which I like to think of as "the source" and "naturalness" and might describe as the silently illuminated wellspring, is with drugs, especially psychedelics, but really any drug that gets you to either reduce activity of the default-mode network or at least notice its operation and stop identifying with it (dissociatives may function like this). In this light, I think of drug users as very spiritual people, only they are unfortunately doing it in a way that is often destructive to their bodies and causes heedlessness (causes them to fail to perceive reality accurately and so act out of confusion and ignorance, leading to greater suffering).
Another way some people manage to get in touch with the source is through exercise. They exercise hard enough that their body gives up devoting enough energy to the brain that the default-mode network shuts down and then they get a "high".
Another way I think many people touch the source is through nostalgia. My theory is that it works like this: when we are young the conceptualizing mind is weak and we see reality as it is more clearly even if it's in a way that is very ignorant of causality; then we get older and understand causality better via stronger models and more conceptualization and discernment but at the cost of less seeing reality directly and more seeing it through our maps; nostalgia is then a feeling of longing to get back to the source, to get back to seeing reality directly the way we did when we were younger.
There are many other ways people get in touch with ultimate reality to gain gnosis of it. The ones I've just described are all "inferior" in the sense that they partially get back to the source in a fragmented way that lets you get only a piece of it. There are "superior" methods, though, maybe the purest (in the sense of having the least extra stuff and the most clear, direct access) of which I consider to be meditation. But however you do it, the human experience of spirituality is all around you and always available, only we've failed to notice the many ways we get at it in the modern, secular world by denying the spirituality of these experiences.
Replies from: Duncan_Sabien, jimrandomh, Viliam, Xenotech
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-09-11T01:21:09.900Z · LW(p) · GW(p)
Only, I don't think you're not having it, you just don't realize you are having those experiences.
The mentality that lies behind a statement like that seems to me to be pretty dangerous. This is isomorphic to "I know better than other people what's going on in those other people's heads; I am smarter/wiser/more observant/more honest."
Sometimes that's *true.* Let's not forget that. Sometimes you *are* the most perceptive one in the room.
But I think it's a good and common standard to be skeptical of (and even hostile toward) such claims (because such claims routinely lead to unjustified and not-backed-by-reality dismissal and belittlement and marginalization of the "blind" by the "seer"), unless they come along with concrete justification:
- Here are the observations that led me to claim that all people do in fact experience X, in direct contradiction of individuals claiming otherwise; here's why I think I'm correct to ignore/erase those people's experience.
- Here are my causal explanations of why and how people would become blindspotted on X, so that it's not just a blanket assertion and so that people can attempt to falsify my model.
- Here are my cruxes surrounding X; here's what would cause me to update that I was incorrect in the conclusions I was reaching about what's going on in other people's heads
... etc.
https://slatestarcodex.com/2017/10/02/different-worlds/
Replies from: Benito, Raemon, gworley
↑ comment by Ben Pace (Benito) · 2019-09-11T01:42:23.383Z · LW(p) · GW(p)
Yeah, I think there's a subtle distinction. While it's often correct to believe things that you have a hard time communicating explicitly (e.g. most of my actual world model at any given time), the claim that there's something definitely true but that in-principle I can't persuade you of and also can't explain to you, especially when used by a group of people to coordinate around resources, is often functioning as a coordination flag and not as a description of reality.
↑ comment by Raemon · 2019-09-11T21:31:30.069Z · LW(p) · GW(p)
Just wanted to note that I am thinking about this exchange, hope to chime in at some point. I'm not sure whether I'm on the same page as Ben about it. May take a couple days to have time to respond in full.
Replies from: Raemon
↑ comment by Raemon · 2019-09-13T03:26:15.697Z · LW(p) · GW(p)
Just a quick update: the mod team just chatted a bunch about this thread. There’s a few different things going on.
It’ll probably be another day before a mod follows up here.
↑ comment by Ben Pace (Benito) · 2019-09-16T23:23:16.063Z · LW(p) · GW(p)
[Mod note] I thought for a while about how shortform interacts with moderation here. When Ray initially wrote the shortform announcement post [LW · GW], he described the features, goals, and advice for using it, but didn’t mention moderation. Let me follow-up by saying: You’re welcome and encouraged to enforce whatever moderation guidelines you choose to set on shortform, using tools like comment removal, user bans, and such. As a reminder, see the FAQ section on moderation [? · GW] for instructions on how to use the mod tools. Do whatever you want to help you think your thoughts here in shortform and feel comfortable doing so.
Some background thoughts on this: In other places on the internet, being blocked locks you out of the communal conversation, but there are two factors that make it pretty different here. Firstly, banning someone from a post on LW means they can’t reply to the content they’re banned from, but it doesn’t hide your content from them or their content from you. And secondly, everyone here on LessWrong has a common frontpage where the main conversation happens - the shortform is a low-key place and a relatively unimportant part of the conversation. (You can be banned from posts on frontpage, but that action requires meeting high standards not required for shortform bans.) Relatively speaking, shortform is very low-key, and I expect the median post gets 3x-10x fewer views than the median frontpage post. It’s a place for more casual conversation, hopefully leading to the best ideas getting made into posts - indeed we’re working on adding an option to turn shortform posts into blogposts. This is why we never frontpage a user’s shortform feed - they rarely meet frontpage standards, and they’re not supposed to.
Just to mention this thread in particular, Gordon is well within his rights to ban users or remove their comments from his shortform posts if he wishes to, and the LW mod team will back him up when he wants to do that.
↑ comment by Gordon Seidoh Worley (gworley) · 2019-09-11T02:31:31.670Z · LW(p) · GW(p)
Sure, this is shortform. I'm not trying very hard to make a complete argument to defend my thoughts, just putting them out there. There is no norm that I must always and everywhere present the best (for some notion of best) version of my reasons for the things I claim, least of all, I think, in this space as opposed to, say, a frontpage post. Thus it feels to me a bit out of place to object in this way here, sort of like objecting that my fridge poetry is not very good or my shower singing is off key.
Now, your point is well taken, but I also generally choose to simply not be willing to cross more than a small amount of inferential distance in my writing (mostly because I think slowly and it requires significant time and effort for me to chain back far enough to be clear to successively wider audiences), since I often think of it as leaving breadcrumbs for those who might be nearby rather than leading people a long way towards a conclusion. I trust people to think things through for themselves and agree with me or not as their reason dictates.
Yes, this means I am often quite distanced from easily verifying the most complex models I have, but such seems to be the nature of complex models that I don't even have complete in my own mind yet, much less complete in a way that I would lay them out precisely such that they could be precisely verified point by point. This perhaps makes me frustratingly inscrutable about my most exciting claims to those with the least similar priors, but I view it as a tradeoff for aiming to better explain more of the world to myself and those much like me at the expense of failing to make those models legible enough for those insufficiently similar to me to verify them.
Maybe my circumstances will change enough that one day I'll make a much different tradeoff?
Replies from: Duncan_Sabien
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-09-11T04:01:38.675Z · LW(p) · GW(p)
This response missed my crux.
What I'm objecting to isn't the shortform, but the fundamental presumptuousness inherent in declaring that you know better than everyone else what they're experiencing, *particularly* in the context of spirituality, where you self-describe as more advanced than most people.
To take a group of people (LWers) who largely say "nah, that stuff you're on is sketchy and fake" and say "aha, actually, I secretly know that you're in my domain of expertise and don't even know it!" is a recipe for all sorts of bad stuff. Like, "not only am I *not* on some sketchy fake stuff, I'm actually superior to my naysayers by very virtue of the fact that they don't recognize what I'm pointing at! Their very objection is evidence that I see more clearly than they do!"
I'm pouring a lot into your words, but the point isn't that your words carried all that so much as that they COULD carry all that, in a motte-and-bailey sort of way. The way you're saying stuff opens the door to abuse, both social and epistemic. My objection wasn't actually a call for you to give more explanation. It was me saying "cut it out," while at the same time acknowledging that one COULD, in principle, make the same claim in a justified fashion, if they cared to.
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2019-09-11T06:25:22.932Z · LW(p) · GW(p)
Note: what follows responds literally to what you said. I'm suspicious enough that my interpretation is correct that I'll respond based on it, but I'm open to the possibility this was meant more metaphorically and I've misunderstood your intention.
It was me saying "cut it out,"
Ah, but that's not up to you, at least not here. You are welcome to dislike what I say, claim or argue that I am dangerous in some way, downvote me, flag my posts, etc. BUT it's not up to you to enforce a norm here to the best of my knowledge, even if it's what you would like to do.
Sorry if that is uncharacteristically harsh and direct of me, but if that was your motivation, I think it important to say I don't recognize you as having the authority to do that in this space, consider it a violation of my commenting guidelines, and will delete future comments that attempt to do the same.
Replies from: Benito, Duncan_Sabien
↑ comment by Ben Pace (Benito) · 2019-09-11T08:35:46.597Z · LW(p) · GW(p)
Hey Gordon, let me see if I understand your model of this thread. I’ll write mine and can you tell me if it matches your understanding?
- You write a post giving your rough understanding of a commonly discussed topic that many are confused by
- Duncan objects to a framing sentence that he claims means “I know better than other people what's going on in those other people's heads; I am smarter/wiser/more observant/more honest." because it seems inappropriate and dangerous in this domain (spirituality)
- You say “Dude, I’m just getting some quick thoughts off my chest, and it’s hard to explain everything”
- Duncan says you aren’t responding to him properly - he does not believe this is a disagreement but a norm-violation
- You say that Duncan is not welcome to prosecute norm violations on your wall unless they are norms that you support
↑ comment by Gordon Seidoh Worley (gworley) · 2019-09-11T15:19:12.165Z · LW(p) · GW(p)
Yes, that matches my own reading of how the interaction progressed, caveat any misunderstanding I have of Duncan's intent.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2019-09-11T17:33:48.345Z · LW(p) · GW(p)
*nods* Then I suppose I feel confused by your final response.
If I imagine writing a shortform post and someone said it was:
- Very rude to another member of the community
- Endorsing a study that failed to replicate
- Lying about an experience of mine
- Trying to unfairly change a narrative so that I was given more status
I would often be like “No, you’re wrong” or maybe “I actually stand by it and intended to be rude” or “Thanks, that’s fair, I’ll edit”. I can also imagine times where the commenter is needlessly aggressive and uncooperative where I’d just strong downvote and ignore.
But I’m confused by saying “you’re not allowed to tell me off for norm-violations on my shortform”. To apply that principle more concretely, it could say “you’re not allowed to tell me off for lying on my shortform”.
My actual model of you feels a bit confused by Duncan’s claim or something, and wants to fight back against being attacked for something you don’t see as problematic. Like, it feels presumptuous of Duncan to walk into your post and hold you to what feels mostly like high standards of explanation, and you want to (rightly) say that he’s not allowed to do that.
Does that all seem right?
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2019-09-11T21:59:03.998Z · LW(p) · GW(p)
Yes. To add to this what I'm most strongly reacting to is not what he says he's doing explicitly, which I'm fine with, but what further conversation suggests he is trying to do: to act as norm enforcer rather than as norm enforcement recommender.
Replies from: Duncan_Sabien
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-09-11T22:05:19.224Z · LW(p) · GW(p)
I explicitly reject Gordon's assertions about my intentions as false, and ask (ASK, not demand) that he justify (i.e. offer cruxes) or withdraw them.
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2019-09-11T23:49:31.065Z · LW(p) · GW(p)
I cannot adequately do that here because it relies on information you conveyed to me in a non-public conversation.
I accept that you say that's not what you're doing, and I am happy to concede that your internal experience of yourself as you experience it tells you that you are doing what you are doing, but I now believe that my explanation better describes why you are doing what you are doing than the explanation you are able to generate to explain your own actions.
The best I can offer, maybe, is that I believe you have said things that are better explained by an intent to enforce norms than by an intent to argue for norms and imply that the general case should be applied in this specific case. I would say the main lines of evidence revolve around how I interpret your turns of phrase, how I read your tone (confrontational and defensive), which aspects of what I have said you have chosen to respond to, how you have directed the conversation, and my general model of human psychology with the specifics you are giving me filled in.
Certainly I may be mistaken in this case and I am reasoning off circumstantial evidence which is not a great situation to be in, but you have pushed me hard enough here and elsewhere that it has made me feel it is necessary to act to serve the purpose of supporting the conversation norms I prefer in the places you have engaged me. I would actually really like this conversation to end because it is not serving anything I value, other than that I believe not responding would simply allow what I dislike to continue and be subtly accepted, and I am somewhat enjoying the opportunity to engage in ways I don't normally so I can benefit from the new experience.
Replies from: Duncan_Sabien
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-09-12T00:21:46.179Z · LW(p) · GW(p)
I note for the record that the above is strong evidence that Gordon was not just throwing an offhand turn of phrase in his original post; he does and will regularly decide that he knows better than other people what's going on in those other people's heads. The thing I was worried about, and attempting to shine a light on, was not in my imagination; it's a move that Gordon endorses, on reflection, and it's the sort of thing that, historically, made the broader culture take forever to recognize e.g. the existence of people without visual imagery, or the existence of episodics, or the existence of bisexuals, or any number of other human experiences that are marginalized by confident projection.
I'm comfortable with just leaving the conversation at "he, I, and LessWrong as a community are all on the same page about the fact that Gordon endorses making this mental move." Personally, I find it unjustifiable and morally abhorrent. Gordon clearly does not. Maybe that's the crux.
Replies from: Vladimir_Nesov, Zack_M_Davis, Duncan_Sabien
↑ comment by Vladimir_Nesov · 2019-09-12T01:30:38.792Z · LW(p) · GW(p)
[He] does and will regularly decide that he knows better than other people what's going on in those other people's heads. [...] Personally, I find it unjustifiable and morally abhorrent.
How can it be morally abhorrent? It's an epistemic issue. Factual errors often lead to bad consequences, but that doesn't make those errors moral errors. A moral error is an error about a moral fact, an assignment of value to situations, as opposed to a prediction of what's going on. And what someone thinks is a factual question, not a question of assigning value to an event.
Replies from: Wei_Dai, Duncan_Sabien
↑ comment by Wei Dai (Wei_Dai) · 2019-09-12T02:03:07.312Z · LW(p) · GW(p)
Things that are morally abhorrent are not necessarily moral errors. For example, I can find wildlife suffering morally abhorrent, but there are obviously no moral errors or any kind of errors being committed there. Given that the dictionary defines abhorrent as "inspiring disgust and loathing; repugnant", I think "I find X morally abhorrent" just means "my moral system considers X to be very wrong or to have very low value."
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2019-09-12T02:38:31.260Z · LW(p) · GW(p)
That's one way for my comment to be wrong, as in "Systematic recurrence of preventable epistemic errors is morally abhorrent."
When I was writing the comment, I was thinking of another way it's wrong: given morality vs. axiology distinction, and distinction between belief and disclosure of that belief, it might well be the case that it's a useful moral principle to avoid declaring beliefs about what others think, especially when those others disagree with the declarations. In that case it's a violation of this principle, a moral wrong, to declare such beliefs. (A principle like this gets in the way of honesty, so promoting it is contentious and shouldn't be an implicit background assumption. And the distinction between belief and its declaration was not clearly made in the above discussion.)
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-09-12T08:34:55.065Z · LW(p) · GW(p)
I find it morally abhorrent because, when not justified and made-cruxy (i.e. when done the only way I've ever seen Gordon do it), it's tantamount to trying to erase another person/another person's experience, and (as noted in my first objection) it often leads, in practice, to socially manipulative dismissiveness and marginalization that's not backed by reality.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2019-09-12T14:27:18.916Z · LW(p) · GW(p)
So it's a moral principle under the belief vs. declaration distinction (as in this comment [LW(p) · GW(p)]). In that case I mostly object to not making that distinction (a norm to avoid beliefs of that form is on an entirely different level than a norm to avoid their declarations).
Personally I don't think the norm about declarations is on the net a good thing, especially on LW, as it inhibits talking about models of thought. The examples you mentioned are important but should be covered by a more specialized norm that doesn't cause as much collateral damage.
Replies from: Duncan_Sabien
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-09-13T05:44:19.954Z · LW(p) · GW(p)
I'm not sure I'm exactly responding to what you want me to respond to, but:
It seems to me that a declaration like "I think this is true of other people in spite of their claims to the contrary; I'm not even sure if I could justify why? But for right now, that's just the state of what's in my head"
is not objectionable/doesn't trigger the alarm I was trying to raise. Because even though it fails to offer cruxes or detail, it at least signals that it's not A STATEMENT ABOUT THE TRUE STATE OF THE UNIVERSE, or something? Like, it's self-aware about being a belief that may or may not match reality?
Which makes me re-evaluate my response to Gordon's OP and admit that I could have probably offered the word "think" something like 20% more charity, on the same grounds, though on net I still am glad that I spelled out the objection in public (like, the objection now seems to me to apply a little less, but not all the way down to "oops, the objection was fundamentally inappropriate").
Replies from: Vladimir_Nesov, gworley
↑ comment by Vladimir_Nesov · 2019-09-13T16:48:51.968Z · LW(p) · GW(p)
(By "belief" I meant a belief that talkes place in someone's head, and its existence is not necessarily communicated to anyone else. So an uttered statement "I think X" is a declaration of belief in X, not just a belief in X. A belief in X is just a fact about that person's mind, without an accompanying declaration. In this framing, the version of the norm about beliefs (as opposed to declarations) is the norm not to think certain thoughts, not a norm to avoid sharing the observations about the fact that you are thinking them.)
I think a salient distinction between declarations of "I think X" and "it's true that X" is a bad thing, as described in this comment [LW(p) · GW(p)]. The distinction is that in the former case you might lack arguments for the belief. But if you don't endorse the belief, it's no longer a belief, and "I think X" is a bug in the mind that shouldn't be called "belief". If you do endorse it, then "I think X" does mean "X". It is plausibly a true statement about the state of the universe, you just don't know why; your mind inscrutably says that it is and you are inclined to believe it, pending further investigation.
So the statement "I think this is true of other people in spite of their claims to the contrary" should mean approximately the same as "This is true of other people in spite of their claims to the contrary", and a meaningful distinction only appears with actual arguments about those statements, not with different placement of "I think".
↑ comment by Gordon Seidoh Worley (gworley) · 2019-09-13T22:43:17.859Z · LW(p) · GW(p)
I forget if we've talked about this specifically before, but I rarely couch things in ways that make clear I'm talking about what I think rather than what is "true" unless I am pretty uncertain and want to make that really clear or expect my audience to be hostile or primarily made up of essentialists. This is the result of having an epistemology where there is no direct access to reality so I literally cannot say anything that is not a statement about my beliefs about reality, so saying "I think" or "I believe" all the time is redundant because I don't consider eternal notions of truth meaningful (even mathematical truth, because that truth is contingent on something like the meta-meta-physics of the world and my knowledge of it is still mediated by perception, cf. certain aspects of Tegmark).
I think of "truth" as more like "correct subjective predictions, as measured against (again, subjective) observation", so when I make claims about reality I'm always making what I think of as claims about my perception of reality since I can say nothing else and don't worry about appearing to make claims to eternal, essential truth since I so strongly believe such a thing doesn't exist that I need to be actively reminded that most of humanity thinks otherwise to some extent. Sort of like going so hard in one direction that it looks like I've gone in the other because I've carved out everything that would have allowed someone to observe me having to navigate between what appear to others to be two different epistemic states where I only have one of them.
This is perhaps a failure of communication: I think I speak in ways in person that make this much clearer, and then I neglect the aspects of tone not adequately carried in text alone (though others can be the judge of that; I basically never get into discussions about this concern in person, even if I do get into meta discussions about other aspects of epistemology). FWIW, I think Eliezer has (or at least had) a similar norm, though to be fair it got him into a lot of hot water too, so maybe I shouldn't follow his example here!
↑ comment by Zack_M_Davis · 2019-09-12T02:13:09.913Z · LW(p) · GW(p)
leaving the conversation at "he, I, and LessWrong as a community are all on the same page about the fact that Gordon endorses making this mental move."
Nesov scooped me [LW(p) · GW(p)] on the obvious objection, but as long as we're creating common knowledge [LW · GW], can I get in on this? I would like you and Less Wrong as a community to be on the same page about the fact that I, Zack M. Davis, endorse making the mental move of deciding that I know better than other people what's going on in those other people's heads when and only when it is in fact the case that I know better than those other people what's going on in their heads (in accordance with the Litany of Tarski).
the existence of bisexuals
As it happens, bisexual arousal patterns in men are surprisingly hard to reproduce in the lab![1] This is a (small, highly inconclusive) example of the kind of observation that one might use to decide whether or not we live in a world in which the cognitive algorithm of "Don't decide that you know other people's minds better than they do" performs better or worse than other inference procedures.
J. Michael Bailey, "What Is Sexual Orientation and Do Women Have One?", section titled "Sexual Arousal Patterns vs. the Kinsey Scale: The Case of Male Bisexuality" ↩︎
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-09-12T08:32:46.114Z · LW(p) · GW(p)
when and only when it is in fact the case that I know better than those other people what's going on in their heads (in accordance with the Litany of Tarski).
Yes, as clearly noted in my original objection, there is absolutely a time and a place for this, and a way to do it right; I too share this tool when able and willing to justify it. It's only suspicious when people throw it out solely on the strength of their own dubious authority. My whole objection is that Gordon wasn't bothering to (I believe as a cover for not being able to).
Replies from: Zack_M_Davis, Vladimir_Nesov↑ comment by Zack_M_Davis · 2019-09-12T15:06:30.513Z · LW(p) · GW(p)
as clearly noted in my original objection
Acknowledged. (It felt important to react to the great-grandparent as a show of moral resistance to appeal-to-inner-privacy conversation halters [LW · GW], and it was only after posting the comment that I remembered that you had acknowledged the point earlier in the thread, which, in retrospect, I should have at least acknowledged even if the great-grandparent still seemed worth criticizing.)
there is absolutely a time and a place for this
Exactly—and lesswrong.com is the place for people to report on their models of reality, which includes their models of other people's minds as a special case.
Other places in Society are right to worry about erasure, marginalization, and socially manipulative dismissiveness [LW(p) · GW(p)]! But in my rationalist culture, while standing in the Citadel of Truth [LW(p) · GW(p)], we're not allowed to care whether a map is marginalizing or dismissive; we're only allowed to care about whether the map reflects the territory. (And if there are other cultures competing for control of the "rationalist" brand name, then my culture is at war with them.)
My whole objection is that Gordon wasn't bothering to
Great! Thank you for criticizing people who don't justify their beliefs with adequate evidence and arguments. That's really useful for everyone reading!
(I believe as a cover for not being able to).
In context, it seems worth noting that this is a claim about Gordon's mind, and your only evidence for it is absence-of-evidence (you think that if he had more justification, he would be better at showing it). I have no problem with this (as we know, absence of evidence is evidence of absence [LW · GW]), but it seems in tension with some of your other claims?
Replies from: Vladimir_Nesov, Duncan_Sabien↑ comment by Vladimir_Nesov · 2019-09-12T15:55:52.237Z · LW(p) · GW(p)
criticizing people who don't justify their beliefs with adequate evidence and arguments
I think justification is in the nature of arguments, but not necessary for beliefs or declarations of beliefs. A belief offered without justification is a hypothesis called to attention. It's concise, and if handled carefully, it can be sufficient for communication. As evidence, it's a claim about your own state of mind, which holds a lot of inscrutable territory that nonetheless can channel understanding that doesn't yet lend itself to arguments. Seeking arguments is certainly a good thing, to refactor and convey beliefs, but that's only a small part of how human intelligence builds its map.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-09-13T05:47:53.251Z · LW(p) · GW(p)
Yeah, if I had the comment to rewrite (I prefer not to edit it at this point) I would say "My whole objection is that Gordon wasn't bothering to (and at this point in the exchange I have a hypothesis that it's reflective of not being able to, though that hypothesis comes from gut-level systems and is wrong-until-proven-right as opposed to, like, a confident prior)."
Replies from: gworley↑ comment by Gordon Seidoh Worley (gworley) · 2019-09-13T22:59:14.192Z · LW(p) · GW(p)
So, having a little more space from all this now, I'll say that I'm hesitant to try to provide justifications for two reasons. First, certain parts of the argument require explaining internal models of human minds that are a level more complex than I can explain even though I'm using them (I only seem to be able to interpret myself coherently one level of organization below the maximum level of organization present in my mind). Second, other parts of the argument require gnosis of certain insights that I (and, to the best of my knowledge, no one) know how to readily convey without hundreds to thousands of hours of meditation and one-on-one interaction (though I do know a few people who continue to hope they may yet discover a way to make that kind of thing scalable, even though we haven't figured it out in 2500 years, maybe because we were missing something that would let us do it).
So it is true that I can't provide adequate episteme for my claim, and maybe that's what you're reacting to. I don't consider this a problem, but I also recognize that within some parts of the rationalist community it is considered a problem (I model you as one such person, Duncan). Given that, I can see why from your point of view it looks like I'm just making stuff up, or worse, since I can't offer "justified belief" that you'd accept as "justified". And I'm not much interested, in this particular case, in changing your mind, since I don't yet know how to generate that change in epistemological stance in others, even though I encountered evidence that led me to that conclusion myself.
Replies from: Vaniver↑ comment by Vaniver · 2019-09-14T00:15:34.972Z · LW(p) · GW(p)
There's a dynamic here that I think is somewhat important: socially recognized gnosis.
That is, contemporary American society views doctors as knowing things that laypeople don't know, and views physicists as knowing things that laypeople don't know, and so on. Suppose a doctor examines a person and says "ah, they have condition X," and Amy responds with "why do you say that?", and the doctor responds with "sorry, I don't think I can generate a short enough explanation that is understandable to you." It seems like the doctor's response to Amy is 'socially justified', in that the doctor won't really lose points for referring to a pre-existing distinction between those-in-the-know and laypeople (except maybe for doing it rudely or gracelessly). There's an important sense in which society understands that it in fact takes many years of focused study to become a physicist, and physicists should not be constrained by 'immediate public justification' or something similar.
But then there's a social question, of how to grant that status. One might imagine that we want astronomers to be able to do their astronomy and have their unintelligibility be respected, while we don't want to respect the unintelligibility of astrologers.
So far I've been talking 'nationally' or 'globally' but I think a similar question holds locally. Do we want it to be the case that 'rationalists as a whole' think that meditators have gnosis and that this is respectable, or do we want 'rationalists as a whole' to think that any such respect is provisional or 'at individual discretion' or a mistake?
That is, when you say:
I don't consider this a problem, but I also recognize that within some parts of the rationalist community that is considered a problem (I model you as being one such person, Duncan).
I feel hopeful that we can settle whether or not this is a problem (or at least achieve much more mutual understanding and clarity).
So it is true that I can't provide adequate episteme of my claim, and maybe that's what you're reacting to.
This feels like the more important part ("if you don't have episteme, why do you believe it?") but I think there's a nearly-as-important other half, which is something like "presenting as having respected gnosis" vs. "presenting as having unrespected gnosis." If you're like "as a doctor, it is my considered medical opinion that everyone has spirituality", that's very different from "look, I can't justify this and so you should take it with a grain of salt, but I think everyone secretly has spirituality". I don't think you're at the first extreme, but I think Duncan is reacting to signals along that dimension.
↑ comment by Vladimir_Nesov · 2019-09-12T14:33:59.451Z · LW(p) · GW(p)
there is absolutely a time and a place for this
That's not the point! Zack is talking about beliefs, not their declaration, so it's (hopefully) not the case that there is "a time and a place" for certain beliefs (even when they are not announced), or that beliefs require ability and willingness to justify them (at least for some senses of "justify" and "belief").
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-09-12T00:42:13.684Z · LW(p) · GW(p)
Oh, one last footnote: at no point did I consider the other conversation private, at no point did I request that it be kept private, and at no point did Gordon ask if he could reference it (to which I would have said "of course you can"). i.e. it's not out of respect for my preferences that that information is not being brought in this thread.
Replies from: gworley↑ comment by Gordon Seidoh Worley (gworley) · 2019-09-12T03:59:38.816Z · LW(p) · GW(p)
Correct, it was made in a nonpublic but not private conversation, so you are not the only agent to consider, though admittedly the primary one other than myself in this context. I'm not opposed to discussing disclosure, but I'm also happy to let the matter drop at this point: I feel I have adequately pushed back against the behavior I did not want to implicitly endorse via silence, and that was my primary purpose in continuing these threads past the initial reply to your comment.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-09-11T07:12:50.208Z · LW(p) · GW(p)
There's a world of difference between someone saying "[I think it would be better if you] cut it out because I said so" and someone saying "[I think it would be better if you] cut it out because what you're doing is bad for reasons X, Y, and Z." I didn't bother to spell out that context because it was plainly evident in the posts prior. Clearly I don't have any authority beyond the ability to speak; to
claim or argue that I am dangerous in some way
IS what I was doing, and all I was doing.
Replies from: gworley↑ comment by Gordon Seidoh Worley (gworley) · 2019-09-11T15:24:16.300Z · LW(p) · GW(p)
I mostly disagree that better reasons matter in a relevant way here, especially since I am currently reading your intent not as informing me that you think there is a norm that should be enforced, but as a bid to enforce that norm. To me what's relevant is intended effect.
Replies from: elityre, Duncan_Sabien↑ comment by Eli Tyre (elityre) · 2019-09-12T09:10:13.477Z · LW(p) · GW(p)
What's the difference?
Suppose I'm talking with a group of loose acquaintances, and one of them says (in full seriousness), "I'm not homophobic. It's not that I'm afraid of gays, I just think that they shouldn't exist."
It seems to me that it is appropriate for me to say, "Hey man, that's not ok to say." It might be that a number of other people in the conversation would back me up (or it might be that they'd defend the first guy), but there wasn't common knowledge of that fact beforehand.
In some sense, this is a bid to establish a new norm, by pushing the private opinions of a number of people into common knowledge. It also seems to me to be a virtuous thing to do in many situations.
(Noting that my response to the guy is not: "Hey, you can't do that, because I get to decide what people do around here." It's "You can't do that, because it's bad" and depending on the group to respond to that claim in one way or another.)
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-09-11T15:28:38.229Z · LW(p) · GW(p)
"Here are some things you're welcome to do, except if you do them I will label them as something else and disagree with them."
Your claim that you had tentative conclusions that you were willing to update away from is starting to seem like lip service.
I am currently reading your intent not as informing me that you think there is a norm that should be enforced
Literally my first response to you centers around the phrase "I think it's a good and common standard to be skeptical of (and even hostile toward) such claims." That's me saying "I think there's a norm here that it's good to follow," along with detail and nuance à la here's when it's good not to follow it.
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2019-09-11T16:22:34.594Z · LW(p) · GW(p)
This is a question of inferred intent, not what you literally said. I am generally hesitant to take much moderation action based on what I infer, but in a nonpublic thread on Facebook you have given me additional reason to believe my interpretation is correct.
(If admins feel this means I should use a reign of terror moderation policy I can switch to that.)
Regardless, I consider this a warning of my local moderation policy only and don't plan to take action on this particular thread.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2019-09-11T18:48:17.859Z · LW(p) · GW(p)
Er, I generally have FB blocked, but I have now just seen the thread on FB that Duncan made about you, and that does change how I read the dialogue (it makes Duncan’s comments feel more like they’re motivated by social coordination around you rather than around meditation/spirituality, which I’d previously assumed).
(Just as an aside, I think it would’ve been clearer to me if you’d said “I feel like you’re trying to attack me personally for some reason and so it feels especially difficult to engage in good faith with this particular public accusation of norm-violation” or something like that.)
I may edit my last comment up-thread a little after taking this into account, though I am still curious about your answer to the question as I initially stated it.
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-09-11T21:07:21.585Z · LW(p) · GW(p)
I can have different agendas and follow different norms on different platforms. Just saying. If I were trying to do the exact same thing in this thread as I am in the FB thread, they would have the same words, instead of different words.
(The original objection *does* contain the same words, but Gordon took the conversation in meaningfully different directions on the two different platforms.)
I note that above, Gordon is engaging in *exactly* the same behavior that I was trying to shine a spotlight on (claiming to understand my intent better than I do myself/holding to his model that I intend X despite my direct claims to the contrary).
↑ comment by jimrandomh · 2019-09-12T00:20:15.389Z · LW(p) · GW(p)
Outside observer takeaway: There's a bunch of sniping and fighting here, but if I ignore all the fighting and look at only the ideas, what we have is that Gordon presented an idea, Duncan presented counterarguments, and Gordon declined to address the counterarguments. Posting on shortform doesn't come with an obligation to follow up and defend things; it's meant to be a place where tentative and early stage ideas can be thrown around, so that part is fine. But I did come away believing the originally presented idea is probably wrong.
(Some of the meta-level fighting seemed not-fine, but that's for another comment.)
↑ comment by Viliam · 2019-09-08T21:24:28.566Z · LW(p) · GW(p)
Seems to me that modern life is full of distractions. As a smart person, you probably have work that requires thinking (not just moving your muscles in a repetitive way). In your free time there is the internet, with all its websites optimized for addictiveness. Plus all the other things you want to do (books to read, movies to see, friends to visit). Electricity can turn your late night into day; you can take a book or a smartphone everywhere.
So, unless we choose them consciously, there are no silent moments to get in contact with yourself... or with whatever higher power you imagine there to be, talking to you.
I wonder what the effect ratio is between meditation and simply taking a break and wondering about stuff. Maybe it's our productivity-focused thinking saying that meditating (doing some hard work in order to gain supernatural powers) is a worthy endeavor, while goofing off is a sin.
Replies from: gworley↑ comment by Gordon Seidoh Worley (gworley) · 2019-09-09T20:41:49.283Z · LW(p) · GW(p)
"Simply taking a break and wondering about stuff" is a decent way to get in touch with this thing I'm pointing at. The main downside to it is that it's slow, in that for it to produce effects similar to meditation probably requires an order of magnitude more time, and likely won't result in the calmest brain states where you can study your phenomenology clearly.
↑ comment by Xenotech · 2019-09-08T02:07:56.443Z · LW(p) · GW(p)
Are there individuals willing to explicitly engage in comforting discussion regarding these things you've written about? Any willing to extend personal invitations?
I would love to discuss spirituality with otherwise "rational" intelligent people.
Please consider reaching out to me personally - it would be transformative: drawnalong@gmail.com
comment by Gordon Seidoh Worley (gworley) · 2019-08-06T20:10:28.995Z · LW(p) · GW(p)
I have plans to write this up more fully as a longer post explaining the broader ideas with visuals, but I thought I would highlight one that is pretty interesting and try out the new shortform feature at the same time! As such, this is not optimized for readability, has no links, and I don't try to backup my claims. You've been warned!
Suppose you frequently found yourself identifying with and feeling like you were a homunculus controlling your body and mind: there's a real you buried inside, and it's in the driver's seat. Sometimes your mind and body do what "you" want, sometimes they don't, and this is frustrating. Plenty of folks reify this in slightly different ways: rider and elephant, monkey and machine, prisoner in cave (or audience member in theater), and, to a certain extent, variations on the S1/S2 model. In fact, I would propose this is a kind of dual process theory of mind that has you identifying with one of the processes.
A few claims.
First, this is a kind of constant, low-level dissociation. It's not the kind of high-intensity dissociation we often think of when we use that term, but it's still a separation of sense of self from the physical embodiment of self.
Second, this is projection, and thus a psychological problem in need of resolving. There's nothing good about thinking of yourself this way; it's a confusion that may be temporarily helpful but it's also something you need to learn to move beyond via first reintegrating the separated sense of self and mind/body.
Third, people drawn to the rationalist community are unusually likely to be the sort of folks who dissociate and identify with the homunculus, S2, the rider, far mode, or whatever you want to call it. It gives them a worldview that says "ah, yes, I know what's right, but for some reason my stupid brain doesn't do what I want, so let's learn how to make it do what I want", when this is in fact a confusion, because it's the very brain that's "stupid" that's producing the feeling that you know what you want!
To speculate a bit, this might help explain some of the rationalist/meta-rationalist divide: rationalists are still dissociating, meta-rationalists have already reintegrated, and as a result we care about very different things and look at the world differently because of it. That's very speculative, though, and I have nothing other than weak evidence to back it up.
comment by Gordon Seidoh Worley (gworley) · 2021-04-08T20:39:13.088Z · LW(p) · GW(p)
More surprised than perhaps I should be that people take up tags right away after they're created. I created the IFS [? · GW] tag just a few days ago after noticing it didn't exist when I wanted to link it, and I added the first ~5 posts that came up when I searched for "internal family systems". It now has quite a few more posts tagged with it that I didn't add. Super cool to see the system working in real time!
comment by Gordon Seidoh Worley (gworley) · 2022-04-05T04:58:47.969Z · LW(p) · GW(p)
One of the fun things about the current Good Heart Token week is that it's giving me cover to try less hard to write posts. I'm writing a bunch, and I have plausible deniability if any of them end up not being that good—I was Goodharting. Don't hate the player, hate the game.
I'm not sure how many of these posts will stand the test of time, but I think there's something valuable about throwing a bunch of stuff at the wall and seeing what sticks. I'm not normally going to invest in that sort of strategy; I just don't have time for it. But for one week it's fun to do it and see what comes out of it, motivated by my desire to stay on the leaderboard and to offset my commitment to the donation lottery.
I think I've already produced a few interesting things, so we'll see what I write in the next couple days!
comment by Gordon Seidoh Worley (gworley) · 2020-02-17T19:17:22.119Z · LW(p) · GW(p)
tl;dr: read multiple things concurrently so you read them "slowly" over multiple days, weeks, months
When I was a kid, it took a long time to read a book. How could it not: I didn't know all the words, my attention span was shorter, I was more restless, I got lost and had to reread more often, I got bored more easily, and I simply read fewer words per minute. One of the effects of this is that when I read a book I got to live with it for weeks or months as I worked through it.
I think reading like that has advantages. By living with a book for longer, the ideas it contained had more opportunity to bump up against other things in my life. I had more time to think about what I had read when I wasn't reading. I more deeply drank in the book as I worked to grok it. And for books I read for fun, I got to spend more time enjoying them, living with the characters and author, by having it spread out over time.
As an adult it's hard to preserve this. I read faster and read more than I did as a kid (I estimate I spend 4 hours a day reading on a typical day (books, blogs, forums, etc.), not including incidental reading in the course of doing other things). Even with my relatively slow reading rate of about 200 wpm, I can polish off ~50k words per day, the length of a short novel.
The trick, I find, is to read slowly by reading multiple things concurrently and reading only a little bit of each every day. For books this is easy: I can just limit myself to a single chapter per day. As long as I have 4 or 5 books I'm working on at once, I can spread out the reading of each to cover about a month. Add in other things like blogs and I can spread things out more.
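If it helps to see the pacing arithmetic, here's a minimal sketch (in Python, with made-up titles and chapter counts of my own invention) of how a one-chapter-per-book-per-day rotation stretches each book out over roughly a month:

```python
# A tiny illustration of the rotation described above: read one chapter of
# each book per day and see how long each book lasts. Titles and chapter
# counts are hypothetical, just for the arithmetic.
books = {"Book A": 30, "Book B": 24, "Book C": 45, "Book D": 18}

chapters_per_day = 1  # chapters read per book per day
for title, chapters in books.items():
    days = chapters / chapters_per_day
    print(f"{title}: {chapters} chapters -> ~{days:.0f} days (~{days / 30:.1f} months)")
```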
I think this has additional benefits over just getting to spend more time with the ideas. It lets the ideas in each book come up against each other in ways they might otherwise not. I sometimes notice patterns that I might otherwise not have because things are made simultaneously salient that otherwise would not be. And as a result I think I understand what I read better because I get the chance not just to let it sink in over days but also because I get to let it sink in with other stuff that makes my memory of it richer and more connected.
So my advice, if you're willing to try it, is to read multiple books, blogs, etc. concurrently, only reading a bit of each one each day, and let your reading span weeks and months so you can soak in what you read more deeply rather than letting it burn bright and fast through your mind to be forgotten like a used up candle.
Replies from: Raemon↑ comment by Raemon · 2020-02-17T22:48:19.327Z · LW(p) · GW(p)
Interesting idea, thanks. I think this also hints at other ways to approach this (i.e. maybe rather than interspersing books with other books, you could intersperse them with non-reading things that still give you some chance to have ideas from multiple domains bumping into each other).
comment by Gordon Seidoh Worley (gworley) · 2020-08-21T22:24:37.958Z · LW(p) · GW(p)
Explanations are liftings from one ontology to another.
Replies from: Raemon↑ comment by Raemon · 2020-08-21T22:35:12.791Z · LW(p) · GW(p)
Seems true, although in some cases I feel like one of the ontologies is just an obviously bigger/better version of another one.
Replies from: gworley↑ comment by Gordon Seidoh Worley (gworley) · 2020-08-24T19:46:51.367Z · LW(p) · GW(p)
This actually fits the lifting metaphor (which is itself a metaphor)!
comment by Gordon Seidoh Worley (gworley) · 2020-04-16T18:52:03.412Z · LW(p) · GW(p)
I get worried about things like this article that showed up on the Partnership on AI blog. Reading it, there's nothing I can really object to in the body of the post: it's mostly about narrow AI alignment and promotes a positive message of targeting things that benefit society rather than narrowly maximizing a simple metric. But it's titled "Aligning AI to Human Values means Picking the Right Metrics", and that implies to me a normative claim that reads in my head something like "to build aligned AI it is necessary and sufficient to pick the right metrics", which is something I think few would agree with. Yet if I were a casual observer just reading the title of this post, I might come away with the impression that AI alignment is as easy as optimizing for something prosocial, not that there are lots of hard problems to be solved to even get AI to do what you want, let alone to pick something beneficial to humanity to do.
To be fair this article has a standard "not necessarily the views of PAI, etc." disclaimer, but then the author is a research fellow at PAI.
This makes me a bit nervous about the effect of PAI on promoting AI safety in industry, especially if it effectively downplays it or makes it seem easier than it is in ways that either encourages or fails to curtail risky behavior in the use of AI in industry.
Replies from: jonathanstray↑ comment by jonathanstray · 2020-04-16T21:12:05.280Z · LW(p) · GW(p)
Hi Gordon. Thanks for reading the post. I agree completely that the right metrics are nowhere near sufficient for aligned AI — further I’d say that “right” and “aligned” have very complex meanings here.
What I am trying to do with this post is shed some light on one key piece of the puzzle, the actual practice of incorporating metrics into real systems. I believe this is necessary, but don’t mean to suggest that this is sufficient or unproblematic. As I wrote in the post, “this sort of social engineering at scale has all the problems of large AI systems, plus all the problems of public policy interventions.”
To me the issue is that large, influential optimizing systems already exist and seem unlikely to be abandoned. There may be good arguments that a particular system should not be used, but it's hard for me to see an argument to avoid this category of technology as a whole. As I see it, the question is not so much "should we try to choose appropriate metrics?" but "do we care to quantitatively monitor and manage society-scale optimizing systems?" I believe there is an urgent need for this sort of work within industry.
Having said all that, you may be right that the title of this post overpromises. I’d welcome your thoughts here.
comment by Gordon Seidoh Worley (gworley) · 2021-04-22T04:01:00.496Z · LW(p) · GW(p)
Sometimes people at work say to me "wow, you write so clearly; how do you do it?" and I think "given the nonsense I'm normally trying to explain on LW, it's hardly a surprise I've developed the skill well enough that when it's something as 'simple' as explaining how to respond to a page or planning a technical project that I can write clearly; you should come see what it looks like when I'm struggling at the edge of what I understand!".
comment by Gordon Seidoh Worley (gworley) · 2022-12-20T05:27:19.099Z · LW(p) · GW(p)
Small boring, personal update:
I've decided to update my name here and various places online.
I started going by "G Gordon Worley III" when I wrote my first academic paper and discovered I there would be significant name collision if I just went by "Gordon Worley". Since "G Gordon Worley III" is, in fact, one version of my full legal name that is, as best as I can tell, globally unique, it seemed a reasonable choice.
A couple years ago I took Zen precepts and received a Dharma name: "Sincere Way." In the Sino-Japanese used for Dharma names, "誠道", or "Seidoh" when written in Romaji.
(It should actually be "Seidou" or "Seidō" by the standard rules, but in the former case no one will say my name correctly if I spell it like that, and in the latter typing a macron is a pain in the ass on most English keyboards. I debated spelling it "Saydoh", since rhyming with "Playdoh" gives the closest English approximation, but that's too nonstandard for me to live with.)
I'm not sure what changed recently, but I've decided to keep my name unique but in a new way by switching to using "Seidoh" as if it were my middle name. It'll probably be a while before I propagate the update everywhere, but that's the plan.
I still expect most everyone to call me Gordon, just as they did before. This is just a new way of writing my name when I want a unique identifier.
But I don't mean to underplay this change.
I was born Gordon. I've become Seidoh. The time has come to honor both.
comment by Gordon Seidoh Worley (gworley) · 2022-03-29T15:36:50.614Z · LW(p) · GW(p)
It seems like humans need an outgroup.
My evidence is not super strong, but I notice a few things:
- There's less political tension and infighting when there's a clear enemy. Think about wartime.
- There's a whole political theory about creating ingroup cohesion based on defining the ingroup against the outgroup. This is how a number of nation-states and religions were congealed.
- Lots of political infighting has ramped up over the last 30+ years. This period has also been a long period of peace with no threat of major power wars. Theory: people constructed an outgroup.
My theory is roughly that humans need an ingroup for a variety of reasons not detailed here, there's no ingroup without an outgroup, thus they need an outgroup to define the ingroup. If no natural outgroup exists they'll create it.
Replies from: Dagon, ChristianKl, yitz↑ comment by Dagon · 2022-03-29T16:23:22.977Z · LW(p) · GW(p)
This is the basic intuition behind the "war on X" framing of political topics. Making Drugs, or Cancer, or whatever, the "outgroup" triggers that sense of us-vs-them. But it doesn't work that well, because human brains are more complicated than that, and are highly tuned to the mix of competition and cooperation with other humans, not non-agentic things.
One of the first things people do in their conception of members of outgroups is to forget or deny their humanity. This step fails for things that already aren't human, and I suspect will derail that path to cohesion.
↑ comment by Viliam · 2022-03-31T11:31:08.917Z · LW(p) · GW(p)
One of the first things people do in their conception of members of outgroups is to forget or deny their humanity. This step fails for things that already aren't human, and I suspect will derail that path to cohesion.
Humans are so fucked up.
"We need an enemy that we can believe is inhuman, so we can unite to fight it."
"Okay, what about Death? That's a logical choice considering that it is already trying to kill you..."
"Nah, too inhuman."
↑ comment by ChristianKl · 2022-03-29T16:57:39.451Z · LW(p) · GW(p)
War framing leads to centralization of power. It allows those at the top to weaken their political enemies, and that in turn results in fewer open conflicts.
This has advantages but also comes with its problems, as dissenting perspectives about how to address problems get pushed out.
↑ comment by Yitz (yitz) · 2022-03-29T16:29:40.149Z · LW(p) · GW(p)
This is why I strongly believe a Hollywood-style alien or Terminator-AI attack would do incredible things for uniting humanity. Unfortunately, AGI irl is unlikely to present in such a way that would make it an easy thing to outgroup…
comment by Gordon Seidoh Worley (gworley) · 2020-10-12T18:20:17.407Z · LW(p) · GW(p)
I recently watched all 7 seasons of HBO's "Silicon Valley" and the final episode (or really the final 4 episodes leading up into the final one) did a really great job of hitting on some important ideas we talk about in AI safety.
Now, the show in earlier seasons has played with the idea of AI with things like an obvious parody of Ben Goertzel and Sophia, discussion of Roko's Basilisk, and of course AI that Goodharts. In fact, Goodharting is a pivotal plot point in how the show ends, along with a Petrov-esque ending where hard choices have to be made under uncertainty to protect humanity and it has to be kept a secret due to an information hazard.
Goodhart, Petrov, and information hazards are not mentioned by name in the show, but the topics are clearly present. Given that the show was/is popular with folks in the SF Bay Area tech scene because it does such a good job of mirroring back what it's like to live in that scene, even if it's a hyperbolic characterization, I wonder if and hope that this will helpfully nudge folks towards normalizing taking AI safety seriously and seeing it as virtuous to forgo personal gain in exchange for safeguarding humanity.
I don't expect things to change dramatically because of the show, but on the margin it might be working to make us a little bit safer. For that reason I think it's likely a good idea to encourage folks not already dedicated to AI safety to watch the show, so long as the effort involved is minimal.
comment by Gordon Seidoh Worley (gworley) · 2020-01-26T05:00:13.505Z · LW(p) · GW(p)
NB: There's something I feel sad about when I imagine what it's like to be others, so I'm going to ramble about it a bit in shortform because I'd like to say this and possibly say it confusingly rather than not say it at all. Maybe with some pruning this babble can be made to make sense.
There's a certain strain of thought and thinkers in the rationality community that make me feel sad when I think about what it must be like to be them: the "closed" individualists. This is as opposed to people who view personal identity as either "empty" or "open".
I'll let Andrés of QRI explain all too briefly:
Closed Individualism: You start existing when you are born, and stop when you die.
Empty Individualism: You exist as a “time-slice” or “moment of experience.”
Open Individualism: There is only one subject of experience, who is everyone.
I might summarize the positions a little differently. Closed individualism is the "naive" theory of individualism: people, agents, etc. are like islands forever separated from each other by the gulf of subjective experience that can only be crossed by sending messages in bottles to the other islands (because you can never leave the island you are on). Empty individualism says that individualism is an after the fact reification and is not a natural phenomenon but rather an illusory artifact of how we understand the world. Open individualism is a position like the thing panpsychists are often trying to backpedal from, that the Universe is experiencing itself through us.
I think other positions are possible. For example, my own thinking is that it's more like seeing these all as partial views that are "right" from a certain frame of thinking but none on its own captures the whole thing. I might call my position something like dialectical empty individualism via comparison to dialectical monism (which I think is the right term to capture my metaphysical position, though neutral monism probably works just as well, ergo neutral empty individualism might be an alternative term).
Anyway, back to the sadness. Now to be fair I feel sad when I think about what it must be like to be anyone who holds tightly to a closed individualism perspective, rationalist or not, but I more often see the extremes of where the closed position takes one among rationalists. I'm making an inference here, but my guess is that a closed individualist view is a large part of what makes things like value drift scary, life extension a top priority, and game & decision theory feel vitally important not just to AI safety but to living life.
And I say all this having previously been a closed individualist for most of my life. And I'm not opposed to the closed individualist view: I'm working on problems in value alignment for AI, I'm signed up for cryonics, and I think better decision theory is worth having. After all, I think closed individualism is right, and not just partially right, but all right, up to the limit of not being willing to say it's right to the exclusion of the other perspectives. I think the closed individualism view feels real to people and is accurately describing both people's experiences of individuality and some of the phenomena that create it.
So why am I sad? In many ways, closed individualism is a view that is built on suffering. It contains within it a great loneliness for creatures like us who want desperately to connect. It says that no matter how hard we try to bridge the gap it will always remain, and many people feel that if they were given the chance to eliminate it and merge with others they wouldn't want to because then they'd lose themselves. To be a closed individualist is to live in fear: fear of death, fear of change, fear of loss of control. To me, that's sad, because there's another way.
The closed individualist might object "so what, this is a teleological argument: I might not want it to be that I am isolated and suffer, but closed individualism is what the world looks like, and I can't be hurt by what is already true, so I maintain this position is right". But I think this is wrong, because closed individualism is "wrong" in the sense that it doesn't tell the whole story. If you're looking for the theory that's the most scientifically defensible, that is surely empty individualism, not closed individualism. But it's also very hard to get an intuitive grasp on empty individualism that you can live with and not sometimes think of yourself as a closed individual, so even the person who believes empty individualism is right tends to act as if closed individualism is how the world works.
The way out lies through open individualism, but this is a hard one to write about. Until you've felt the joy of open-hearted connectedness to all being with every fiber of existence, I think you'd have a hard time taking this view seriously, and the only way to feel this and take it seriously is probably through hundreds if not thousands of hours of meditation (you can also feel it with drugs, but I think it's more likely a person would dismiss or misunderstand the feeling as just a cool thing they felt on drugs). The "I am not it but it is me" sense you get is not really possible to explain to others; you have to see it for yourself, because it exists somewhere beyond distinction, such that it can never be brought back to a world carved up into more than one whole.
So here we are, trapped in a world of suffering that persists because every closed individualist suffers and generates more suffering for the whole, because it is in all of us. Thus am I sad.
Replies from: Dagon, Viliam↑ comment by Dagon · 2020-01-27T18:00:58.666Z · LW(p) · GW(p)
[upvoted for talking about something that's difficult to model and communicate about]
Hmm. I believe (with fairly high confidence - it would take a big surprise to shift me) a combination of empty and closed. Moments of self-observed experience are standalone, and woven into a fabric of memories in a closed, un-sharable system that will (sooner than I prefer) physically degrade into non-experiencing components.
I haven't found anyone who claims to be open AND is rational enough to convince me they're not just misstating what they actually experience. In fact, I'd love to hear someone talk about what it means to "want" something if you're experiencing all things simultaneously.
I'm quite sympathetic to the argument that it is what it is, and there's no reason to be sad. But I'm also unsure whether or why my acceptance of closed-empty existence makes you sad. Presumably, if your consciousness includes me, you know I'm not particularly sad overall (I certainly experience pain and frustration, but also joy and optimistic anticipation, in a balance that seems acceptable).
Replies from: gworley↑ comment by Gordon Seidoh Worley (gworley) · 2020-01-27T20:26:09.234Z · LW(p) · GW(p)
But I'm also unsure whether or why my acceptance of closed-empty existence makes you sad.
Because I know the joy of grokking the openness of the "individual" and see the closed approach creating inherent suffering (via wanting for the individual) that cannot be accepted because it seems to be part of the world.
↑ comment by Viliam · 2020-01-26T22:13:41.173Z · LW(p) · GW(p)
I wonder how much the "great loneliness for creatures like us" is a necessary outcome of realizing that you are an individual, and how much it is a consequence of e.g. not having the kinds of friends you want to have, i.e. something that you wouldn't feel under the right circumstances.
From my perspective, what I miss is people similar to me, living close to me. I can find like-minded people, but they live in different countries (I met them on LW meetups). Thus, I feel more lonely than I would feel if I lived in a different city. Similarly, being extraverted and/or having greater social skills could possibly help me find similar people in my proximity, maybe. Also, sometimes I meet people who seem like they could be what I miss in my life, but they are not interested in being friends with me. Again, this is probably a numbers game; if I could meet ten or hundred times more people of that type, some of them could be interested in me.
(In other words, I wonder whether this is not yet another case of "my personal problems, interpreted as a universal experience of the humankind".)
Yet another possible factor is the feeling of safety. The less safe I feel, the greater the desire of having allies, preferably perfect allies, preferably loyal clones of myself.
Plus the fear of death. If, in some sense, there are copies of me out there, then, in some sense, I am immortal. If I am unique, then at my death something unique (and valuable, at least to me) will disappear from this universe, forever.
Replies from: gworley↑ comment by Gordon Seidoh Worley (gworley) · 2020-01-27T20:22:59.570Z · LW(p) · GW(p)
My quick response is that all of these sources of loneliness can still be downstream of using closed individualism as an intuitive model. The more I am able to use the open model the more safe I feel in any situation and the more connected I feel to others no matter how similar or different they are to me. Put one way, every stranger is a cousin I haven't met yet, but just knowing on a deep level that the world is full of cousins is reassuring.
comment by Gordon Seidoh Worley (gworley) · 2019-08-30T18:41:03.225Z · LW(p) · GW(p)
Strong and Weak Ontology
Ontology is how we make sense of the world. We make judgements about our observations and slice up the world into buckets we can drop our observations into.
However I've been thinking lately that the way we normally model ontology is insufficient. We tend to talk as if ontology is all one thing, one map of the territory. Maybe these can be very complex, multi-manifold maps that permit shifting perspectives, but one map all the same.
We see some hints that this picture of ontology as a single map breaks down in the way some people, myself included, have noticed that you can hold multiple, contradictory ontologies and switch between them. And with further development there's no switching; it all just is, only complex and with multiple projections that overlap.
But there's more. What we've been talking about here has mostly been a "strong" form of ontology that seeks to say something about the being of the world, to reify it into type-objects that can be considered, but there's also a "weak" kind of ontology from which ontology arises and which can exist without the "strong" version. It's the ontology that I referenced at the start of the post, the ontology of discrimination and nothing else. So much of ontology is taking the discrimination and turning it into a full-fledged model or map, but there's a weak notion of ontology that exists even if all we do is draw lines where we see borders.
I can't recall seeing much on this in Western philosophy; I thought about this after combining my reading of Western philosophy with my reading of Buddhist philosophy and what it has to say about how mental activity arises. But Buddhist philosophy doesn't have a strong notion of ontology the way Western philosophy does, so maybe it's not surprising this subtle point has gone missed.
comment by Gordon Seidoh Worley (gworley) · 2019-08-07T01:54:55.393Z · LW(p) · GW(p)
So long as shortform is salient for me, might as well do another one on a novel (in that I've not heard/seen anyone express it before) idea I have about perceptual control theory, minimization of prediction error/confusion, free energy, and Buddhism that I was recently reminded of.
There is a notion within Mahayana Buddhism of the three poisons: ignorance, attachment (or, I think we could better term this here, attraction, for reasons that will become clear), and aversion. This is part of one model of where suffering arises from. Others express these notions in other ways, but I want to focus on this way of talking about these root kleshas (defilements, afflictions, mind poisons) because I think it has a clear tie in with this other thing that excites me, the idea that the primary thing that neurons seek to do is minimize prediction error.
Ignorance, even among the three poisons, is generally considered more fundamental, in that ignorance appears first and it gives rise to attraction and aversion (in some models there is fundamental ignorance that gives rise to the three poisons, marking a separation between ignorance as mental activity and ignorance as a result of the physical embodiment of information transfer). This looks to me a lot like what perceptual control theory predicts if the thing being controlled for is minimization of prediction error: there is confusion about the state of the world, information comes in, and this sends a signal within the control system of neurons to either up or down regulate something. Essentially what the three poisons describe is what you would expect the world to look like if the mind were powered by control systems trying to minimize confusion/ignorance, nudging the system toward and away from a set point where prediction error is minimized via negative feedback (and a small bonus, this might help explain why the brain doesn't tend to get into long-lasting positive feedback loops: it's not constructed for it and before long you trigger something else to down-regulate because you violate its predictions).
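To make the control-loop picture concrete, here is a minimal sketch (in Python, with made-up numbers and function names of my own invention, not anything from the PCT or predictive-processing literature) of a unit that holds a prediction and up- or down-regulates it to shrink prediction error:

```python
# A toy negative-feedback loop in the spirit of perceptual control theory:
# a unit holds a prediction, compares it against incoming observations, and
# nudges the prediction to shrink the error. Everything here is illustrative.

def run_control_loop(observations, prediction=0.0, gain=0.3):
    """Nudge `prediction` toward each observation, minimizing prediction error."""
    history = []
    for obs in observations:
        error = obs - prediction          # the analogue of confusion/ignorance
        prediction += gain * error        # up-regulate if error > 0, down-regulate if < 0
        history.append((obs, round(prediction, 3), round(error, 3)))
    return history

if __name__ == "__main__":
    # A world that sits near 1.0 with some jitter; the unit settles toward it,
    # and the error shrinks over time via negative feedback.
    world = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0]
    for obs, pred, err in run_control_loop(world):
        print(f"observed={obs:+.2f}  prediction={pred:+.3f}  error={err:+.3f}")
```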
It also makes a lot of sense that these would be the root poisons. I think we can forgive 1st millennium Buddhists for not discovering PCT or minimization of prediction error directly, but we should not be surprised that they identified the mental actions this theory predicts should be foundational to the mind, and also recognized that they were foundational actions to all others. Elsewhere, Buddhism explicitly calls out ignorance as the fundamental force driving dukkha (suffering), though we probably shouldn't assign too many points to (non-Madhyamaka) Buddhism for noticing this, since other Buddhist theories don't make these same claims about attachment and aversion, and they are used concurrently in explication of the dharma.
comment by Gordon Seidoh Worley (gworley) · 2022-06-27T02:13:07.195Z · LW(p) · GW(p)
In a world that is truly and completely post-scarcity there would be no need for making tradeoffs.
Normally when we think about a post-scarcity future we think in terms of physical resources like minerals and food and real estate because for many people these are the limiting resources.
But the world is wealthy enough that some people already have access to this kind of post-scarcity. That is, they have enough money that they are not effectively limited in access to physical resources. If they need food, shelter, clothing, materiel, etc. they can get it in sufficient quantities to satisfy their needs. And yet these post-scarcity people still have a scarce resource they can't get enough of: time.
Because time is limited, we must be judicious in its use. In a world of true post-scarcity, there would be enough time that it would be effectively unlimited, say quadrillions of years of subjective experience. With so much time on our hands, we would not need to make judicious use of time. And that goes not just for our own time but for the time of others.
Today much of how we pick jobs, romantic partners, hobbies, etc. is determined by tradeoffs. Given many options but limited time and resources, we pick ones that let us maximize given the constraints. For example, maybe I'd really like to teach history and paint landscapes, but teaching history doesn't pay me much and painting landscapes takes more spare time than I have, so instead I get a job in accounting and watch TikToks. They offer me a better tradeoff in terms of the things I want: getting paid and having fun with my spare time. I'm not necessarily happy that I cannot do better, but I am happy to at least do the best I can.
In a world with effectively unlimited time, these constraints would not apply and there would be no need to make tradeoffs.
Now, unfortunately, I don't expect to ever see such a true world of post-scarcity. There are a few things that are absolutely limited in this world, like time and energy. Although we can get a lot of them, perhaps more than we know what to do with, they're still limited, and so long as there's sufficient competition from others I expect those resources to get eaten up. Lots of free time and energy? Let's create some more people to make use of them. Maybe not Malthusian-trap, repugnant-conclusion numbers of them, but a lot of them. Enough that we have to make time tradeoffs again. If we're not making time tradeoffs, we're not making enough people or using enough energy.
And so I expect to always face these limits. We'll likely face different limits than we do today. For example, I'm not worried that in a post-scarcity future we wouldn't have time to both teach history and paint landscapes if that's what we wanted to do, but I do worry that we'd only have time to spend 10^29 lifetimes as a bird or snail, never getting to explore the full space of possible experiences. It'll be a different kind of tradeoff we'll have to make, but a tradeoff nonetheless.
Replies from: Viliam↑ comment by Viliam · 2022-06-30T19:44:11.956Z · LW(p) · GW(p)
There will always be a way to ruin post-scarcity if humanity reproduces exponentially, unless some new laws of physics are discovered that would allow unlimited exponential growth. Or maybe future legislation will make reproduction the only remaining scarce thing. As people currently get richer, they have fewer babies on average, but the reason is that we live in (from a historical perspective) unprecedented luxury that we now take for granted and need to give up a part of when taking care of kids. Post-scarcity robotic nannies could easily reverse this trend.
I wonder what it is like to be super rich. I can easily imagine burning lots of money for things that my current self would consider reasonable. First, I could somewhat trade money for time, by paying people to do stuff that I want to get done but isn't inherently enjoyable and would take too much time to do it myself. Second, I could move to more ambitious projects that are currently clearly out of my reach so I usually do not even think much about them. Third, there are global projects like solving poverty or curing malaria, that even Bill Gates cannot handle alone.
Yeah, immortality would be nice; it would remove a lot of pressure from... almost everything. I wonder whether humans would invent some way to ruin this, too. For example, imagine a culture that you want to be a part of, one that updates frequently (changes its norms, evolves new jargon), so you need to spend a lot of time every day keeping up with it; and if you fall off the wagon once, it will be very difficult to join again. Maybe to avoid low status, you will need to spend a lot of time doing stupid things that you do not enjoy, but it will be a kind of multiplayer prisoner's dilemma. Some kind of trap, where people get punished for (a) refusing to sacrifice to Moloch, and (b) interacting with those who get punished; and even if many of your friends would agree that the system is stupid, they would not be ready to get socially shunned by the rest of humanity forever. In a more dystopian version, all human communication would be monitored, and merely saying "this is stupid" or otherwise trying to create common knowledge could get you called out and punished.
comment by Gordon Seidoh Worley (gworley) · 2022-04-03T01:01:10.447Z · LW(p) · GW(p)
If I want to continue to rack up Good Heart Tokens I now have to make legit contributions, not just make a bid to feed me lots of karma because I'm going to donate it [LW · GW].
So, what would be an interesting post you'd enjoy reading from me? It'll have to be something I can easily put together without doing a lot of research.
I unfortunately don't have a backlog of things to polish up and put out because I've been working on a book [LW · GW], and although I have draft chapters none of them is quite ready to go out. I might be able to get one of them out the door before GHT go away, but I'd rather use this as a chance to produce some one off content, only my mind hasn't been turned to collecting good topics for self-contained posts lately.
So, what would you like to see me write?
comment by Gordon Seidoh Worley (gworley) · 2022-03-02T20:43:47.155Z · LW(p) · GW(p)
One of the nice things in my work is I can just point it out when I think something human is getting in the way. Like, sometimes someone says an idea is a bad idea. If I dig in, sometimes there's a human reason they say that: they don't actually think it's a bad idea, they just don't think they will like doing the work to make the idea real, or something similar. Those are different things, and it's important to have a conversation to sort that out; then we can move forward on two topics: is the idea good, and why don't you want to be involved with it?
But in online conversations, especially on LW, people often feel like it's rude to come after someone's humanness. If you disagree with an idea, it's only normatively acceptable to talk about the idea, not about your motivations for disagreeing. Yes, this comes from standards needed to separate ideas from people and is generally useful, but sometimes it gets in the way and covers up the real reason for a disagreement.
For example, maybe someone suggests we should have prediction markets for everything. You say that sounds terrible. But really you had a personal experience with prediction markets where someone posted the question "will you break up with your romantic partner?", everyone bet "yes", and then it came true, and now you have it out for prediction markets, but you don't say that, you just have lots of reasons why prediction markets are a bad idea. But if we only talk about your purported reasons we'll never get to the heart of the objection!
I think we make a mistake in talking about ideas and forgetting that it's humans doing the talking. Separating ideas from people does some good: naively not separating them creates all kinds of problems which is why we have this bit of social tech in place! But it also can go too far, and we need to find specific ways to let the humans back into the idea discussions so we can address the sources of the ideas, not just the ideas themselves. Seems relevant to convincing others, uncovering the reasons for your own beliefs, and building consensus about what is true of the world.
Replies from: TLW↑ comment by TLW · 2022-03-03T05:22:51.851Z · LW(p) · GW(p)
An issue with sharing human stories is the juxtaposition between:
- Many people are/must be anonymous online.
- Sharing human stories is often self-doxing.
↑ comment by Matt Goldenberg (mr-hire) · 2022-03-03T09:54:15.774Z · LW(p) · GW(p)
Do you have a human story about why sharing stories is self-doxxing? I imagine most stories can be told in a way that doesn't doxx, especially if you change some details that are irrelevant to the crux.
Replies from: TLW↑ comment by TLW · 2022-03-04T00:28:56.265Z · LW(p) · GW(p)
Some stories aren't. That being said, many stories are. I would give examples from my own experience on this site, but they are, uh, self-doxing.
especially if you change some details that are irrelevant to the crux.
Most of the issues arise either a) when the crucial details are themselves the details that you have to hide ("How can you be an expert on X given that there's about a half-dozen people that know X?" is a classic, for instance.), or b) the story in isolation doesn't leak enough bits of information to self-dox, but when combined with other already-told (and hence irrevocable) stories is enough.
(Remember, you only need ~33 bits of information to uniquely identify an individual[1]. That's tiny.)
[1] Although of course this can be more difficult in practice.
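For what it's worth, here is a minimal back-of-the-envelope sketch of where the ~33-bit figure comes from; the population number and per-fact rarities are hypothetical illustrations (and treat the facts as independent), not anything drawn from the comments above.

```python
import math

# To single out one person among ~8 billion, you need about
# log2(8e9) bits of identifying information.
world_population = 8_000_000_000  # illustrative figure
print(f"bits needed: {math.log2(world_population):.1f}")  # ~32.9

# A few innocuous-seeming facts add up fast (rarities are rough guesses,
# and assume independence, which real facts often violate).
facts = {
    "country (~1 in 300 of world population)": math.log2(300),
    "profession (~1 in 1000)": math.log2(1000),
    "age bracket (~1 in 15)": math.log2(15),
    "attended a niche conference (~1 in 50,000)": math.log2(50_000),
}
for name, bits in facts.items():
    print(f"{name}: {bits:.1f} bits")
print(f"total: {sum(facts.values()):.1f} bits")  # already past the ~33-bit threshold
```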
comment by Gordon Seidoh Worley (gworley) · 2021-03-08T01:37:22.496Z · LW(p) · GW(p)
Hogwarts Houses as Religions
Okay, this is just a fun nonsense idea I thought up. Please don't read anything too much into it, I'm just riffing. Sorry if I've mischaracterized a religion or Hogwarts house!
What religion typifies each Hogwarts house?
I'll start with Hufflepuff, which I think is aligned with Buddhism: treat everyone the same, and if you want salvation the only option is to do multiple lifetimes worth of work.
Next is Ravenclaw, which looks a lot like Judaism: there's a system to the world, you gotta follow the rules, and also, let's debate and research endlessly to understand the last corner of things we don't yet understand.
For Gryffindor, I'm saying Christianity: some things are just right and true and worth going on crusades for, and if you want to get into heaven you just gotta believe hard enough (but also good works might help).
Okay, now for the tough one, Slytherin. I'm going to go with Sorting Hat Chats and say Slytherin is the house of in-group loyalty, and base my decision purely on that rather than all the other cruft associated with this house. And if we're going by that, then I think we gotta link up Slytherin with Islam: if you're in, you're family; if you're not in but in-adjacent, you'll get second-class treatment that's still pretty fair; if you're out, then you're out and all bets are off.
Now obviously there are lots of caveats here, like none of these "religions" I've picked out are really one single thing but rather multiple traditions and practices, some of which don't fit the pattern above well. Look, I get it. I'm just trying to point at something like the core thing associated with each of these religions that may be deviated from by various branches, e.g. Quakers are obviously Hufflepuffs, atheists are I guess Ravenclaws, LDS is probably Slytherin based on my reasoning above, etc. This is just meant to be fun, and sorry if I offended anyone who comes across this.
comment by Gordon Seidoh Worley (gworley) · 2019-09-08T23:00:34.693Z · LW(p) · GW(p)
If CAIS is sufficient for AGI, then likely humans are CAIS-style general intelligences.
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-09-09T15:44:55.061Z · LW(p) · GW(p)
What's the justification for this? Seems pretty symmetric to "If wheels are sufficient for getting around, then it's likely humans evolved to use wheels."
Replies from: gworley↑ comment by Gordon Seidoh Worley (gworley) · 2019-09-09T20:38:10.957Z · LW(p) · GW(p)
Human brains look like they are made up of many parts with various levels and means of integration. So if it turns out to be the case that we could build something like AGI via CAIS, that is, if comprehensive AI services can be assembled in a way that results in general intelligence, then I think it's likely that human intelligence doesn't have anything special going on that would meaningfully differentiate it from the general notion of CAIS, other than being implemented in meat.
comment by Gordon Seidoh Worley (gworley) · 2021-05-24T13:34:38.165Z · LW(p) · GW(p)
Robert Moses and AI Alignment
It's useful to have some examples in mind of what it looks like when an intelligent agent isn't aligned with the shared values of humanity. We have some extreme examples of this, like paperclip maximizers, and some less extreme examples that are still extreme in human terms, like dictators such as Stalin, Mao, and Pol Pot, who killed millions in pursuit of their goals. But these feel like outliers that people can too easily argue away as extreme cases that no "reasonable" system would produce.
Okay, so let's think about how hard it is to just get "reasonable" people aligned, much less superintelligent AIs.
Consider Robert Moses, a man who achieved much at the expense of wider humanity. He worked within the system, gamed it, did useful things incidentally since they happened to bring him power or let him build a legacy, and then wielded that power in ways that harmed many while helping some. He was smart, generally caring, and largely aligned with what seemed to be good for America at the time, yet still managed to pursue courses of action that were not really aligned with humanity as a whole.
We have plenty of other examples, but I think most of them don't put it in quite the kind of stark contrast Moses does. He's a great example of the kind of failure mode you can expect from inadequate alignment mechanisms (though on a smaller scale): you get something that's kinda like what you wanted, but also bad in ways you probably didn't anticipate ahead of time.
Replies from: ChristianKl↑ comment by ChristianKl · 2021-05-25T13:11:31.791Z · LW(p) · GW(p)
He worked within the system, gamed it, did useful things incidentally since they happened to bring him power or let him build a legacy, and then wielded that power in ways that harmed many while helping some.
I don't think Moses did useful things just because they brought him power. From reading Caro's biography, it seems to me that, especially at the beginning, Moses had good intentions.
When it comes to parks, parks also don't just help some people; they helped most people. When Moses caused a park to be built when the money would have been better spent on a new school, the issue isn't that fewer people profited from the park than would have profited from the school.
I think a key problem with Moses is that as his power grew, his workload also grew. Instead of delegating some of his power to people under him, he made decisions about projects he had little time to invest in.
If he had invested the time, he could likely have understood that mothers who want to take small children to the park have a problem when they use a stroller and the park entrance has stairs. Moses, however, cut himself off from being questioned, and as a result such issues didn't get addressed when planning new parks.
Other problems came from the things he did to hold on to his power, making the system both opaque and corrupt.
While opacity might come with an AGI, I would be more surprised if issues arise because the AGI cuts itself off from information flow or doesn't have enough time to manage its duties. The AGI can just spin up more instances.
comment by Gordon Seidoh Worley (gworley) · 2021-02-10T14:49:10.485Z · LW(p) · GW(p)
Won't I get bored living forever?
I feel like this question comes up often as a kind of pushback against the idea of living an unbounded number of years, or even just a really, really long time, beyond the scale of human comprehension of what it would mean to live that many years.
I think most responses rely on intuition about our lives. If your life today seems full of similar days and you think you'd get bored, not living forever or at least taking long naps between periods of living seems appealing. Alternatively, if your life today seems full of new experiences, you'd expect to keep having new experiences if you lived a long time. That's probably a good description of why most people believe what they do.
But I think we can make a better argument for why living a long time wouldn't result in your getting bored.
Much of the idea that you'd get bored is predicated on the idea that one day (so much as that remains a coherent concept in the future) is much like another. In fact, maybe you live the same day many times, and that seems boring and not worth living. Yet days can only be "the same" relative to one's understanding of them, or else be literally the same day, in which case there would be no reason not to keep reliving it.
To explain: fundamentally, no moment in the universe seems to be the same as any other moment, because universe moments vary along dimensions that guarantee this. Yes, there may be many quite similar moments, but no two moments are literally the same. The only exception is that you might run a simulation within the universe that creates days that are identical from within the frame of the simulation. But if you do this then there's no problem with living the same day multiple times, because if the same day is literally the same day, you must be the same on each of those days and would thus have no realization you were reliving the same day (i.e. it's "Groundhog Day" but no one realizes it's a loop). Yes, to us this might look like a kind of wireheading, and we might prefer that not be how we live, but from the inside we wouldn't object, and if we did it wouldn't matter, because the day would just reset and our objection would arise afresh each day without anything ever changing or us ever getting tired of living the same day over and over.
Thus we are left with days that are in fact unique, even if we believe them to be the same, and so we can get bored only to the extent we don't care about the uniqueness of each day. This seems quite satisfying to me as a reason to want to live forever, to get to see the unfolding of events over billions of years or longer, but even if it's not I think it should give you hope that there might be something to do over billions of years since notions of sameness of days are likely either limitations of human imagination or the result of being trapped in a looping simulation (which, again, you shouldn't care about from the inside of the simulation).
Replies from: Viliam, Gunnar_Zarncke, Dagon↑ comment by Viliam · 2021-02-14T15:57:55.297Z · LW(p) · GW(p)
When people barely live 100 years and we worry about them getting bored if they could live forever... that seems to me like finding a beggar who only has $100 net worth and is asking for some spare change, and explaining to him that giving him more money would be bad because eventually he would become a billionaire and everyone knows that power corrupts. Yeah, it has some philosophical merit, but it is completely unrelated to life as we know it.
↑ comment by Gunnar_Zarncke · 2021-02-11T00:26:19.827Z · LW(p) · GW(p)
For me, this looks like a very simplified treatment (I mean in an I-need-to-simplify-to-model-it way; I wanted to avoid the word 'academic'), while boredom, as you seem to use the word, is a very practical and complex emotion. I can't disagree with your model, but I don't think it captures what people feel is boring now or what would be boring in the future. I think a good counterpoint is the one by Yoav, that you can just go to sleep until something new comes up, something that is not possible if your time is limited to begin with.
↑ comment by Dagon · 2021-02-10T16:49:20.682Z · LW(p) · GW(p)
When this argument is presented to me, there are two counterpoints I often use:
- Simple induction. I wasn't bored enough to want to die yesterday, nor the day after that (today). Assuming that future days are roughly as similar as the past two, that degree of novelty is sufficient.
- Options are not commitments. If I ever do want to die, I can do so. If it never happens, or doesn't happen for a thousand or a hundred thousand years, that's fine too.
For those who really want to engage on #2, I've had interesting conversations about akrasia-like self-disagreements where "I am bored and would prefer to have died" but "I have FOMO and will not willingly die". For this, there is a possibility of mechanism design, where the decision can be made rule-based. Something like "after N years (say, 3/4 the median lifespan of your reference group), take a permanent poison, such that you must take an antidote every week/year. If you ever get bored/unhappy enough to not take the antidote, you die."
A tougher disagreement is the Malthusian one - old people are already too powerful, and it'll get far worse if they're healthy and active for centuries (let alone longer). Further, they take resources/opportunities from the young. The availability heuristic for this is vampires, not techno-utopia. I have yet to really find a good counterargument for this - it quite likely contains a fair grain of truth, at least for the current planetary and human governance limitations.
Replies from: Yoav Ravid, Gunnar_Zarncke↑ comment by Yoav Ravid · 2021-02-10T17:01:00.473Z · LW(p) · GW(p)
Another option on #2 is to go into some kind of preservation instead of total death forever. Then you can leave instructions on when to wake you up (if X person asks, if X event happens, in X years, etc.). You still miss out on some stuff, but not literally everything.
The second benefit is that, for the people who stayed, it's not like you died and they'll never be able to interact with you again; you just took a really long vacation :)
↑ comment by Gunnar_Zarncke · 2021-02-11T00:33:02.162Z · LW(p) · GW(p)
If powerful old people go to sleep when they are bored then they run the risk of being overtaken by younger, faster, and less risk-averse people. Maybe a good model is corporations: Corporations are also immortal and can learn more and more but they also have more to lose and they seem to acquire knowledge that also seems to slow them down. If there are changes in the environment or innovations they often cannot adapt fast enough and are quickly overtaken by younger players.
comment by Gordon Seidoh Worley (gworley) · 2020-08-10T16:50:21.857Z · LW(p) · GW(p)
Personality quizzes are fake frameworks [LW · GW] that help us understand ourselves.
What-character-from-show-X-are-you quizzes, astrology, and personality categorization instruments (think Big-5, Myers-Briggs, Magic: The Gathering colors, etc.) are perennially popular. I think a good question to ask is: why do humans like this stuff so much that even fairly skeptical folks tend to object not to categorization itself, but only to any particular system's categorization being bad?
My stab at an answer: humans are really confused about themselves, and are interested in things that seem to have even a little explanatory power to help them become less confused about who they are. Metaphorically, this is like if we lived in a world without proper mirrors, and people got really excited about anything moderately reflective because it let them see themselves, if only a little.
On this view, these kinds of things, while perhaps not very scientific, are useful to folks because they help them understand themselves. This is not to say we can totally rehabilitate all such systems, since often they perform their categorization by mechanisms with very weak causal links that may not even rise above the level of noise (*cough* astrology *cough*), nor that we should be satisfied with personality assessments that involve lots of conflation and don't resolve much confusion, but on the whole we should be happy that these things exist because they help us see our psyches in the absence of proper mental mirrors.
(FWIW, I do think there is a way to polish your mind into a mirror that can see itself, and that I have managed to do this to some extent, but that's a bit beside the point I want to make here.)
Replies from: Dagon↑ comment by Dagon · 2020-08-10T20:18:32.169Z · LW(p) · GW(p)
They help us understand others as well - even as fake frameworks, anything that fights against https://wiki.lesswrong.com/wiki/Typical_mind_fallacy is useful. I'd argue these categorizations don't go far enough, and imply a smaller space of variation than is necessary for actual modeling of self or others, but a lot of casual observers benefit from just acknowledging that there IS variation.
comment by Gordon Seidoh Worley (gworley) · 2020-04-02T15:30:02.445Z · LW(p) · GW(p)
As I work towards becoming less confused about what we mean when we talk about values, I find that it feels a lot like I'm working on a jigsaw puzzle where I don't know what the picture is. Also, all the pieces have been scattered around the room, so I have to find them first, digging between couch cushions and looking under the rug and behind the bookcase, before I can even figure out how they fit together or what they fit together to describe.
Yes, we have some pieces already, and others think they know (infer, guess) what the picture is from those (it's a bear! it's a cat! it's a woman in a fur coat!). As I work I find it helpful to keep updating my own guess, because even when it's wrong it sometimes helps me think of new ways to try combining the pieces or suggests what pieces might be missing that I should go look for. But it also often feels like I'm failing all the time, because I'm updating rapidly based on new information and that keeps changing my best guess.
I suspect this is a common experience for folks working on problems in AI safety and many other complex problems, so I figured I'd share this metaphor I recently hit on for making sense of what it is like to do this kind of work.
comment by Gordon Seidoh Worley (gworley) · 2020-01-28T22:55:01.007Z · LW(p) · GW(p)
Most of my most useful insights come not from realizing something new and knowing more, but from realizing something ignored and being certain of less.
comment by Gordon Seidoh Worley (gworley) · 2019-12-10T19:47:09.263Z · LW(p) · GW(p)
After seeing another LW user (sorry, forgot who) mention this post in their commenting guidelines, I've decided to change my own commenting guidelines to the following, matching pretty closely the SSC commenting guidelines that I forgot existed until just a couple days ago:
Comments should be at least two of true, useful, and kind, i.e. you believe what you say, you think the world would be worse without this comment, and you think the comment will be positively received.
I like this because it's simple and it says what rather than how. My old guidelines were all about how:
Seek to foster greater mutual understanding and prefer good faith to bad, nurture to combat, collaboration to argument, and dialectic to debate. Do that by:
- aiming to understand the author and their intent, not what you want them to have said or fear that they said
- being charitable about potential misunderstandings, assuming each person is trying their best to be clearly understood and to advance understanding
- resolving disagreement by finding the crux or synthesizing contrary views to sublimate the disagreement
I'm fairly tolerant, but if you're making comments that are actively counterproductive to fruitful conversation by failing to read and think about what someone else is saying, I'm likely to ask you to stop and, if you don't, delete your comments and, if you continue, ban you from commenting on my posts. Some behavior that is especially likely to receive warnings, deletions, and bans:
- trying to "score points"
- hitting "applause lights [LW · GW]"
- being contrarian for its own sake
More generally, I think the SSC commenting guidelines might be a good cluster for those of us who want LW comment sections to be "nice" and so mark our posts as norm-enforcing. If this catches on, it might help with finding the few clusters of commenting norms people want without having lots of variation between authors.
comment by Gordon Seidoh Worley (gworley) · 2019-12-10T19:36:20.634Z · LW(p) · GW(p)
http://www.overcomingbias.com/2019/12/automation-so-far-business-as-usual.html
I similarly suspect automation is not really happening in a dramatically different way thus far. Maybe that will change in the future (I think it will), but it's not here yet.
So why so much concern about automation?
I suspect it's because of something they don't look at much in this study (based on the summary): displacement. People are likely being displaced from jobs into other jobs by automation or the perception of automation, and some few of those exit the labor market rather than switch into new jobs. Further, those who do move to new jobs likely disprefer them because they require different skills, the workers are less skilled at them immediately after switching, and due to that lack of initial skill the new jobs initially pay less than the old ones. This creates compelling evidence for the "automation is destroying jobs" story even though the bigger picture makes it clear that this isn't really happening, in particular because that story ignores the contrary evidence from what happens after a worker has been in a new job for a few years post-displacement and has recovered to pre-displacement wage levels.
comment by Gordon Seidoh Worley (gworley) · 2022-06-23T18:45:52.092Z · LW(p) · GW(p)
I started showing symptoms and testing positive for COVID on Saturday. I'm now over nearly all the symptoms other than some pain in parts of my body and fatigue.
The curious question in my mind is, what's causing this pain and fatigue and what can be done about it?
My high-level, I'm-not-a-doctor theory is that there's something like generalized inflammation happening in my body, doing things makes it worse, and then my body sends out the signal to rest in order to get the inflammation back down. Once it's down I can do things for a while until it builds up again and I have to stop. This is based roughly on the observation that things just feel a bit sore, it gets better if I rest, and then it comes back from doing not much more than moving, sitting, standing, etc.
So maybe just taking anti-inflammatory drugs would help? I've done that when the pain got bad enough, but I didn't notice it doing anything like letting me go longer without needing to rest.
I'm not yet in anything like long-COVID territory here since it's not even been a week, so perhaps it's reasonable to say I just need to keep resting and let my body heal, which I think is good advice to be sure, but I also want to put out a feeler to see if there's something more I could be doing that would be helpful.
comment by Gordon Seidoh Worley (gworley) · 2021-04-19T16:26:27.442Z · LW(p) · GW(p)
Maybe spreading cryptocurrency is secretly the best thing we can do short term to increase AI safety because it increases the cost of purchasing compute needed to build AI. Possibly offset, though, by the incentives to produce better processors for cryptocurrency mining that are also useful for building better AI.
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2021-04-19T21:30:14.603Z · LW(p) · GW(p)
I'd say "more than offset". Increases chip makers' economies of scale and justifies higher R&D outlays...
comment by Gordon Seidoh Worley (gworley) · 2021-01-05T20:26:32.528Z · LW(p) · GW(p)
This post suggests a feature idea for LessWrong to me:
https://www.lesswrong.com/posts/6Nuw7mLc6DjRY4mwa/the-national-defense-authorization-act-contains-ai [LW · GW]
It would be pretty cool if, instead of a lot of comments ordered by votes or time of posting, it were possible to write a post with parts that could be commented on directly. So, for example, the comments for a particular section could live straight in the section rather than down at the bottom. Could be an interesting way to deal with lots of comments on large, structured posts.
Replies from: Pattern↑ comment by Pattern · 2021-01-06T03:48:58.605Z · LW(p) · GW(p)
You have reinvented Google Docs.
A similar effect could be achieved by having a sequence which...all appears on one page. (With the comments.)
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2021-01-06T12:31:43.836Z · LW(p) · GW(p)
Medium also has this feature, and I think it improves the Medium discourse quite a bit.
Replies from: Pattern↑ comment by Pattern · 2021-01-06T22:42:16.825Z · LW(p) · GW(p)
Which feature?
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2021-01-06T23:27:07.134Z · LW(p) · GW(p)
Commenting on specific parts of articles and seeing those comments as you go through the article.
comment by Gordon Seidoh Worley (gworley) · 2020-02-14T01:09:00.457Z · LW(p) · GW(p)
A few months ago I found a copy of Staying OK, the sequel to I'm OK—You're OK (the book that probably did the most to popularize transactional analysis), on the street near my home in Berkeley. Since I had previously read Games People Play and had not thought about transactional analysis much since, I scooped it up. I've just gotten around to reading it.
My recollection of Games People Play is that it's the better book (based on what I've read of Staying OK so far). Also, transactional analysis is kind of in the water in ways that are hard to notice, so you are probably already somewhat familiar with its ideas, though probably not explicitly in a way you could use to build new models (for example, as far as I can tell, notions of strokes and life scripts were popularized by, if not fully originated within, transactional analysis). So if you aren't familiar with transactional analysis I recommend learning a bit about it: although it's a bit dated and we arguably have better models now, it's still pretty useful for noticing patterns in the ways people interact with others and themselves, sort of like the way the most interesting thing about Metaphors We Live By is just pointing out the metaphors and recognizing their presence in speech, rather than whether the general theory is maximally good or not.
One thing that struck me as I'm reading Staying OK is its discussion of the trackback technique. I can't find anything detailed online about it beyond a very brief summary. It's essentially a multi-step process for dealing with conflicts in internal dialogue, "conflict" here being a technical term referring to crossed communication in the transactional analysis model of the psyche. Or at least that's how it's presented. Looking at it a little closer and reading through examples in the book that are not available online, it's really just poorly explained memory reconsolidation [LW · GW]. To the extent it works as a method in transactional analysis therapy, it seems to work because it taps into the same mechanisms as Unlocking the Emotional Brain [LW · GW].
I think this is interesting both because it shows how we've made progress and because it shows that transactional analysis (along with a lot of other things) was also getting at stuff that works, just less effectively, because it had weaker evidence to build on that was more confounded with other possible mechanisms. To me this counts as evidence that building theory on phenomenological evidence can work and is better than nothing, but will be supplanted by work that manages to tie in "objective" evidence.
comment by Gordon Seidoh Worley (gworley) · 2019-12-26T20:50:22.512Z · LW(p) · GW(p)
Off-topic riff on "Humans are Embedded Agents Too [LW · GW]"
One class of insights that come with Buddhist practice might be summarized as "determinism", as in, the universe does what it is going to do no matter what the illusory self predicts. Related to this is the larger Buddhist notion of "dependent origination", that everything (in the Hubble volume you find yourself in) is causally linked. This deep deterministic interdependence of the world is hard to appreciate from our subjective experience, because the creation of ontology creates a gulf that cuts us off from direct interaction, causing us to confuse map and territory. Much of the path of practice is learning to unlearn this useful confusion that allows us to do much by focusing on the map so we can make better predictions about the territory.
In AI alignment, many difficulties and confusions arise from failing to understand what is there termed embeddedness: the insight that everything happens in the world, not alongside it on the other side of a Cartesian veil. The trouble is that dualism is pernicious and intuitive to humans, even as we deny it, and unlearning it is not as simple as reasoning that the problem exists. Our thinking is so polluted with dualistic notions that we struggle to see the world any other way. I suspect if we are to succeed at building safe AI, we'll have to get a lot better at understanding and integrating the insight of embeddedness.
comment by Gordon Seidoh Worley (gworley) · 2022-10-06T15:08:23.549Z · LW(p) · GW(p)
I just noticed something odd. It's not that odd: the cognitive bias that powers it is well known. It's more odd that a company is leaving money on the table by not exploiting it.
I primarily fly United and book rental cars with Avis. United offers to let you buy refundable fares for a little more than the price of a normal ticket. Avis lets you pre-pay for your rental car to receive a discount. These are symmetrical situations presented with a different framing because the default action is different in the two cases: on United the default is to have a non-refundable ticket, and with Avis the default is to have an effectively refundable rental (because you don't pay until pickup).
I find that I basically never buy refundable tickets from United and never pre-pay for my rental car. My mind offers these reasons: the refundable airline fare is effectively insurance and not worth it in expectation since I have the money to eat the cost of the ticket (and in practice, because I'm a loyal customer with status, they'll almost always let me convert my fare to future ticket credit rather than take my money), and pre-paying for the rental car takes away flexibility in travel plans I want to have.
But this is crazy! I should be pre-paying for the rental car if I'm effectively doing the same for the flight. The situation is basically the same. Yet I don't, because of which thing is the default action.
So what's odd is that United is not taking advantage of this to make refundable fares the default and let me choose a discount to get a non-refundable fare. Maybe now that I'm used to the current situation I would always choose the non-refundable fare, but they'd likely get more people to buy them if it was the default action.
I wonder why they don't? My best guesses are regulation preventing that and price competition making it more worthwhile to make the cheapest fare possible the default and sell everything else as an add-on rather than offering discounts. But then why is the rental car market different?
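For what it's worth, here is a minimal sketch of the symmetry claim, using made-up prices and a made-up cancellation probability (none of these numbers come from United or Avis):

```python
# Both choices are "pay a flexibility premium vs. accept an expected loss";
# only the default differs between the two companies.
p_cancel = 0.10  # hypothetical chance my travel plans change

# Flight: non-refundable is the default; refundable costs a premium.
fare = 400
refundable_premium = 60
print("flight: expected loss if non-refundable =", p_cancel * fare,
      "vs refundable premium =", refundable_premium)

# Rental car: refundable (pay at pickup) is the default; pre-paying earns a discount.
rental = 300
prepay_discount = 45
print("car: expected loss if pre-paid =", p_cancel * rental,
      "vs pre-pay discount =", prepay_discount)

# With these toy numbers, skipping the refundable fare (40 < 60) and taking
# the pre-pay discount (30 < 45) are the consistent pair of choices; picking
# differently in the two cases is the default effect, not expected value.
```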
Replies from: martin-vlach↑ comment by Martin Vlach (martin-vlach) · 2022-10-06T17:28:15.496Z · LW(p) · GW(p)
My guess is that the rental car market has less direct/local competition, while airlines are concentrated on airport routes and the many cheap-flight search engines (e.g. Kiwi.com) encourage this mindset.
Is there a price comparison for car rentals?
comment by Gordon Seidoh Worley (gworley) · 2021-05-07T19:01:52.905Z · LW(p) · GW(p)
Isolate the Long Term Future
Maybe this is worthy of a post, but I'll do a short version here to get it out.
- In modern computer systems we often isolate things to increase reliability.
- If one isolated system goes down, the others keep working.
- Examples:
- multiple data centers spread around the world
- using multiple servers that all do the same thing running in those different data centers
- replicating data between data centers
- isolating customers within a single data center so if one goes down only the customers using that data center are affected
- We can do the same kind of thing with the long term future of humanity.
- We can send out "seed probes" to create offshoots in other parts of the universe.
- Eventually those parts of the universe will recede beyond each other's Hubble volumes.
- Then even if one Hubble volume suffers an existential catastrophe, the others keep going (see the sketch after this list).
- On shorter time scales, we can aim to increase isolation, possibly also using replication, to get better resiliency of the future.
- Some shorter term examples include
- isolated communities on Earth that are self-sufficient
- extra-planetary colonies
- and eventually extragalactic colonies that become fully isolated
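A minimal sketch of the reliability intuition behind this list, assuming offshoots whose failures are fully independent (the per-offshoot probability is made up for illustration):

```python
# With k isolated offshoots, each independently suffering existential
# catastrophe with probability p over some period, everything is lost
# only if all of them fail: p ** k, which shrinks fast as k grows.
def p_all_fail(p: float, k: int) -> float:
    return p ** k

p = 0.3  # hypothetical per-offshoot catastrophe probability
for k in (1, 2, 4, 8):
    print(f"{k} isolated offshoot(s): P(everything lost) = {p_all_fail(p, k):.6f}")

# This mirrors why data centers replicate data across regions; the benefit
# only holds if failures really are independent, which is the point of
# pushing offshoots beyond each other's Hubble volumes.
```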
comment by Gordon Seidoh Worley (gworley) · 2021-03-29T19:02:43.249Z · LW(p) · GW(p)
Psychological Development and Age
One of the annoying things about developmental psychology is disentangling age-related from development-related effects.
For example, as people age they tend to get more settled or to have their lives more sorted out. I'm pointing at the thing where kids and teenagers and adults in their 20s tend to have a lot of uncertainty about what they are going to do with their lives, and that slowly decreases over time.
A simple explanation is that it's age related, or maybe more properly experience related. As a person lives more years, tries more things, and better learns the limits of what they can achieve, they become more settled into the corner of the world in which they can successfully operate. We should expect people with fewer years of experience to be more confused about themselves and older people to be less confused because they've had more years to figure things out. This need not invoke developmental psychology, just simple accumulation of evidence.
But counterexamples of resigned young people and curious, adventurous old people abound. Perhaps we can explain these simply in terms of contexts that cause them, and if a person were put in a different context they might behave differently and in a more age-anticipated way.
Where I think developmental psychology helps is in understanding subtler differences in how people behave and why. For example, is a person content with their life because they've tried a bunch of things and given up stretching too far, since they know it'll be hard to do new things and they'll probably fail so it's not worth the effort, or are they content with their life because they trust themselves? It can be kind of hard to tell from the outside, and you might not even be able to tell with an intervention, say by making something easier for them so they can do some new thing they previously could not do easily and seeing how they respond.
This is one of the challenges I think we face in talking about developmental psychology. It seems a useful model for explaining many aspects of how minds progress over time, but it's hard to figure out what's conflated with things that would happen in a world with no developmental psychology beyond simple learning.
(Don't worry; I haven't suddenly found developmental psychology less useful, just musing a bit on an issue that comes up often of the form "this developmental model sounds fine as far as it goes, but what about this simpler explanation".)
comment by Gordon Seidoh Worley (gworley) · 2021-01-11T22:07:55.029Z · LW(p) · GW(p)
ADHD Expansionism
I'm not sure I fully endorse this idea, hence short form, but it's rattling around inside my head and maybe we can talk about it?
I feel like there's a kind of ADHD (or ADD) expansionism happening, where people are identifying all kinds of things as symptoms of ADHD, especially subclinical ADHD.
On the one hand this seems good, in the sense that this kind of expansionism seems to actually be helping people by giving them permission to be the way they are via a diagnosis and giving them strategies they can try to live their lives better.
On the other hand, I feel like it's terrible in terms of actually diagnosing ADHD. It might help to explain why I think that.
Much of what I see that I'm terming ADHD expansionism looks to me like taking normal human behavior that is ill-fitted to the modern environment and then pathologizing it. As best I can tell, it's normal and adaptive for humans to exhibit various behaviors that get labeled as ADHD symptoms, like flitting between multiple activities, hyperfocus on things the mind finds important but doesn't necessarily endorse as important (S1-important things, not S2-important), understimulation, overstimulation, and otherwise finding it hard to focus on any one thing.
All of that sounds like normal, adaptive forager behavior to me. Some of it became maladaptive during the farming era, but not especially so, and now in the industrial era it is even less adaptive.
Thus I think ADHD suffers from the same issue as codependency: if you start to describe the symptoms you quickly realize 90% of humanity has this "problem". I think we're doing ourselves a disservice by considering it a pathology, because that fails to acknowledge that most of these mental habits are just what it's like to be a normal human, and that it's our conditions that are unusual and that we are struggling to function within.
I don't see this as cause to throw out modern industrial society, but rather that we need to think about ways to adapt our systems to better accommodate real humans rather than the idealized ones of high modernism.
On the ground level, yes, we may still need to do much to personally intervene against ADHD-like symptoms, just as we may need to do against our natural tendency towards codependency, but I think there's something being lost by even talking about it this way. Rather, we need to think of it as how do we cope with being humans engaged in systems that ask us to behave in unusual ways, and see the systems as the broken things, not ourselves. It's not that everyone has ADHD or codependency; rather, it's that our systems pathologize normal behavior because they are confused about what is typical.
Replies from: Dagon
comment by Gordon Seidoh Worley (gworley) · 2021-01-08T04:24:11.985Z · LW(p) · GW(p)
You're always doing your best
I like to say "you're always doing your best", especially as kind words to folks when they are feeling regret.
What do I mean by that, though? Certainly you can look back at what you did in any given situation and imagine having done something that would have had a better outcome.
What I mean is that, given all the conditions under which you take any action, you always did the best you could. After all, if you could have done something better given all the conditions, you would have.
The key is that all the conditions include the entire history of the world up to the present moment, and so that necessarily includes your life history, the life history of others, the physical environment, your emotional state, how tired you were, how your brain works, etc.. The trick is that when you condition your actions so fully there's no room left for any counterfactuals, for you could have done nothing else!
As you might guess, I'm proposing a deterministic outlook on the world. I won't really argue that too much here, other than to say that if you look long and hard enough at free will it dissolves into an after-the-fact illusion contingent on how your brain compresses reality and models yourself and that this is robust to quantum effects since even if quantum effects result in random outcomes you nonetheless only ever find yourself in a single history where some particular thing happened regardless of how it happened.
The immediate corollary of all this is that you also are always doing your worst, only that doesn't land too well when someone feels regret.
I like this insight because, fully taken in, it dissolves regret. Not that you can't imagine having done better, propose things you might do differently in the future, and then try them to see what happens and maybe actually do better than you previously did. Rather, it dissolves regret because regret hinges on feeling as if a counterfactual could have really happened. Once you deeply believe that counterfactuals are not real, i.e. they are purely of the map and have no existence in the territory independent of the map, regret just has no way to come into existence.
This doesn't mean you can't still feel related emotions like remorse, especially if you realize you were negligent and had a responsibility to have done better but didn't, but that's different than clinging to a desire to have done something different; remorse is owning that you did something less than what you were capable of under the circumstances and might reasonably be asked to make amends.
So next time you feel regret, try reminding yourself it couldn't have gone any other way.
comment by Gordon Seidoh Worley (gworley) · 2020-07-27T17:46:49.114Z · LW(p) · GW(p)
I feel like something is screwy with the kerning on LW over the past few weeks. Like I keep seeing sentences that look like they are missing a space between the period and the start of the next sentence, but when I check closely they are not. For whatever reason this doesn't seem to show in the editor, only in the displayed text.
I think I've only noticed this with comments and short form, but maybe it's happening other places? Anyway, wanted to see if others are experiencing this and raise a flag for the LW team that a change they made may be behaving in unexpected ways.
Replies from: Benito, habryka4↑ comment by Ben Pace (Benito) · 2020-07-27T17:49:08.966Z · LW(p) · GW(p)
It is totally real and it's been this way for over two months. It's an issue with Chrome, and I'm kinda boggled that Chrome doesn't jump on these issues; it's a big deal for readability.
↑ comment by habryka (habryka4) · 2020-07-27T18:26:44.616Z · LW(p) · GW(p)
Yep, it's a Chrome bug. It's kind of crazy.
comment by Gordon Seidoh Worley (gworley) · 2019-11-20T02:11:10.836Z · LW(p) · GW(p)
Story stats are my favorite feature of Medium. Let me tell you why.
I write primarily to impact others. Although I sometimes choose to do very little work to make myself understandable to anyone who is more than a few inferential steps behind me and then write out on a far frontier of thought, nonetheless my purpose remains sharing my ideas with others. If it weren't for that, I wouldn't bother to write much at all, and certainly not in the same way as I do when writing for others. Thus I care instrumentally a lot about being able to assess if I am having the desired impact so that I can improve in ways that might help serve my purposes.
LessWrong provides some good, high detail clues about impact: votes and comments. Comments on LW are great, and definitely better in quality and depth of engagement than what I find other places. Votes are also relatively useful here, caveat the weaknesses of LW voting I've talked about before. If I post something on LW and it gets lots of votes (up or down) or lots of comments, relative to what other posts receive, then I'm confident people have read what I wrote and I impacted them in some way, whether or not it was in the way I had hoped.
That's basically where story stats stop on LessWrong. Here's a screen shot of the info I get from Medium:
For each story you can see a few things here: views, reads, read ratio, and fans, which is basically likes. I also get an email every week telling me about the largest updates to my story stats, like how many additional views, reads, and fans a story had in the last week.
If I click the little "Details" link under a story name I get more stats: average read time, referral sources, internal vs. external views (external views are views on RSS, etc.), and even a list of "interests" associated with readers who read my story. All of this is great. Each week I get a little positive reward letting me know what I did that worked, what didn't, and most importantly to me, how much people are engaging with things I wrote.
I get some of that here on LessWrong, but not all of it. Although I've bootstrapped myself now to a point where I'll keep writing even absent these motivational cues, I still find this info useful for understanding what things I wrote that people liked best or found most useful and what they found least useful. Some of that is mirrored here by things like votes, but it doesn't capture all of it.
I think it would be pretty cool if I could see more stats about my posts on LessWrong similar to what I get on Medium, especially view and read counts (knowing that "reads" is ultimately a guess based on some users allowing JavaScript that lets us guess that they read it).
Replies from: Ruby↑ comment by Ruby · 2019-11-21T03:09:53.739Z · LW(p) · GW(p)
Very quick thought: basically the reason we haven't done, and might not do, more in this direction is that it might alter what gets written. It doesn't seem good if people were to start writing more heavily for engagement metrics. Also not clear to me that engagement metrics capture the true value of intellectual contributions.
Replies from: Raemon↑ comment by Raemon · 2019-11-21T03:12:53.870Z · LW(p) · GW(p)
(Habryka has an old comment somewhere delving into this, which I couldn't find. But the basic gist was "the entire rest of the internet is optimizing directly for eyeballs, and it seemed good for LessWrong to be a place trying to have a different set of incentives")