Posts

The Problem of the Criterion 2021-01-21T15:05:41.659Z
The Teleological Mechanism 2021-01-19T23:58:54.496Z
Zen and Rationality: Karma 2021-01-12T20:56:57.475Z
You are Dissociating (probably) 2021-01-04T14:37:02.207Z
A Model of Ontological Development 2020-12-31T01:55:58.654Z
Zen and Rationality: Skillful Means 2020-11-21T02:38:09.405Z
No Causation without Reification 2020-10-23T20:28:51.831Z
The whirlpool of reality 2020-09-27T02:36:34.276Z
Zen and Rationality: Just This Is It 2020-09-20T22:31:56.338Z
Zen and Rationality: Map and Territory 2020-09-12T00:45:40.323Z
How much can surgical masks help with wildfire smoke? 2020-08-21T15:46:12.914Z
Bayesiance (Filk) 2020-08-18T16:30:00.753Z
Zen and Rationality: Trust in Mind 2020-08-11T20:23:34.434Z
Zen and Rationality: Don't Know Mind 2020-08-06T04:33:54.192Z
Let Your Mind Be Not Fixed 2020-07-31T17:54:43.247Z
[Preprint] The Computational Limits of Deep Learning 2020-07-21T21:25:56.989Z
Comparing AI Alignment Approaches to Minimize False Positive Risk 2020-06-30T19:34:57.220Z
What are the high-level approaches to AI alignment? 2020-06-16T17:10:32.467Z
Pragmatism and Completeness 2020-06-12T16:34:57.691Z
The Mechanistic and Normative Structure of Agency 2020-05-18T16:03:35.485Z
What is the subjective experience of free will for agents? 2020-04-02T15:53:38.992Z
Deconfusing Human Values Research Agenda v1 2020-03-23T16:25:27.785Z
Robustness to fundamental uncertainty in AGI alignment 2020-03-03T23:35:30.283Z
Big Yellow Tractor (Filk) 2020-02-18T18:43:09.133Z
Artificial Intelligence, Values and Alignment 2020-01-30T19:48:59.002Z
Towards deconfusing values 2020-01-29T19:28:08.200Z
Normalization of Deviance 2020-01-02T22:58:41.716Z
What spiritual experiences have you had? 2019-12-27T03:41:26.130Z
Values, Valence, and Alignment 2019-12-05T21:06:33.103Z
Doxa, Episteme, and Gnosis Revisited 2019-11-20T19:35:39.204Z
The new dot com bubble is here: it’s called online advertising 2019-11-18T22:05:27.813Z
Fluid Decision Making 2019-11-18T18:39:57.878Z
Internalizing Existentialism 2019-11-18T18:37:18.606Z
A Foundation for The Multipart Psyche 2019-11-18T18:33:20.925Z
In Defense of Kegan 2019-11-18T18:27:37.237Z
Why does the mind wander? 2019-10-18T21:34:26.074Z
What's your big idea? 2019-10-18T15:47:07.389Z
Reposting previously linked content on LW 2019-10-18T01:24:45.052Z
TAISU 2019 Field Report 2019-10-15T01:09:07.884Z
Minimization of prediction error as a foundation for human values in AI alignment 2019-10-09T18:23:41.632Z
Elimination of Bias in Introspection: Methodological Advances, Refinements, and Recommendations 2019-09-30T20:23:13.139Z
Connectome-specific harmonic waves and meditation 2019-09-30T18:08:45.403Z
Goodhart's Curse and Limitations on AI Alignment 2019-08-19T07:57:01.143Z
G Gordon Worley III's Shortform 2019-08-06T20:10:27.796Z
Scope Insensitivity Judo 2019-07-19T17:33:27.716Z
Robust Artificial Intelligence and Robust Human Organizations 2019-07-17T02:27:38.721Z
Whence decision exhaustion? 2019-06-28T20:41:47.987Z
Let Values Drift 2019-06-20T20:45:36.618Z
Say Wrong Things 2019-05-24T22:11:35.227Z
Boo votes, Yay NPS 2019-05-14T19:07:52.432Z

Comments

Comment by gworley on Technological stagnation: Why I came around · 2021-01-26T04:11:03.733Z · LW · GW

I speculate that there may be anthropic reasons for some of the stagnation.

In particular, I suspect the slowdown in the growth of energy production may be what we would expect to see: in worlds where energy production grew faster, we would also have had more abundant and more powerful weapons, which would have been more likely to kill more humans, making it less likely that one finds oneself alive in a world where such growth continued unchecked.

Comment by gworley on Is Immortality Ethical? · 2021-01-25T19:34:06.765Z · LW · GW

This seems framed strangely to me. It's mostly an issue we might face, say, in a far future where there is no more frontier, all available energy is being used to do something, and we have to decide on tradeoffs between existing beings and possible future ones (or on the relative size of beings, or how much "cpu time" they get relative to "wall time"). As it stands today, this seems like a non-issue, since we are not at risk of preventing future people from existing by extending lives today (unless you have a compelling argument that this is the case).

Comment by gworley on On clinging · 2021-01-25T14:59:08.209Z · LW · GW

The example most salient to me here is Buddhism. The Buddha, famously, thought that suffering originates in something like “thirst,” “greed,” “craving,” “desire,” or “attachment.” If you let go of that thing, you let go of suffering, and attain perfect peace.

So, I think it's worth pointing out that this is a fairly surface-level understanding of dukkha. It's not a bad place to start, but thinking this is the full extent of it will result in confusion about Buddhist practice.

A deeper understanding is that dukkha is a fundamental motion of being a living thing. At its heart, I'd describe it as the continual pressure to optimize for something within a negative feedback loop, and so your idea of caring is definitely still in the realm of the origin of dukkha.

The only way to actually free yourself from it, then, is to stop the feedback, and this is at the far end where Buddhist practices end up: training techniques to shut down feedback within the brain so it only feeds forward.

I'd then describe the idea of liberation from dukkha as twofold. One way is to shut down feedback altogether in meditation. The other is to shut down the secondary feedback cycle that creates meta-dukkha: suffering about suffering. The latter takes one a long way down the path, but it's even further to the former.

Caveat: I'm not a Buddhist scholar or a teacher, just a Zen practitioner who is also a member of the LessWrong/Rationalist community, so any confusion here on the finer points is my own.

Comment by gworley on The Problem of the Criterion · 2021-01-25T02:43:33.751Z · LW · GW

Right, and this gets at why Chisholm argues there are only really two positions (particularism and skepticism/nihilism): coherentism still requires one to know the idea of coherentism to start, although to be fair, coherentism would theoretically allow one to stop believing in coherentism if believing in it ceased to be coherent!

Comment by gworley on The Problem of the Criterion · 2021-01-25T02:40:32.302Z · LW · GW

But "it is true that nothing is true" s self defeating

Correct; this is another way in which skepticism begs the question.

If the problem of the criterion means that nothing is well justified, then strong claims should be avoided, including strong negative claims like "nothing is true". So scepticism done right is moderation in all things.

Right, the kind of "skepticism" I'm talking about here is different from everyday skepticism which is more like reserving judgement until one learns more. Skepticism here is meant to point to a position you might also call "nihilism", but that's not the term Chisholm uses and I stuck with his terminology around this.

Comment by gworley on Raemon's Shortform · 2021-01-23T22:55:30.548Z · LW · GW

I like "letting go" better because to me "grieving" is placing some frame around the kind of letting go being done. When I think of grieving I think of the process of dealing with the death of a loved one. But I let go of things all the time without grieving, or because I already did all the grieving a long time ago for a whole category of thing and so now I just let things go because I never was really holding on to them—they were just resting within my grasp.

Comment by gworley on The Problem of the Criterion · 2021-01-23T01:49:14.245Z · LW · GW

(Alright, breaking my own rule; we'll see how it goes.)

Eliezer already wrote a post on this, so no one should bother writing anything about the same topic that does not add anything new to Eliezer’s post, or do anything to result in changing one’s conclusion from that described in Eliezer’s post. This post does not (indeed, it would be quite difficult to write anything that does).

I guess I don't have much to say to this, as I disagree with your judgement that there's no value in saying the same things in a new way even if nothing new is added (and I of course also disagree that I don't add anything). Sometimes saying the same thing in a different way makes it click for someone it didn't previously click for, because not everyone has literally the same mind. Cf. Anna Salamon on learning soft skills and the project of distillation.

My point is precisely that conundrums of epistemology, and epistemological questions in general, are motivated by pragmatic things or else by nothing.

This is literally also one of my main points, so I guess we at least agree on something.

I do not think there is any “problem of the criterion”, except in a way such as is already addressed to the maximum possible degree of satisfaction by the linked Sequence post. Hence no “conundrum” exists.

Sure, but I'm not satisfied with his post, hence I think there is more to say, though you obviously disagree.

Thus I'm left struggling to figure out a charitable motivation for your comments. That is, I'm not sure what point you are trying to make here other than that you are annoyed I wrote something you didn't want to read, since I already agree with pragmatism, I linked and highlighted your preferred Yudkowsky article on this topic, and we otherwise agree that there is something to be addressed regarding grounding problems.

Comment by gworley on Exercise: Taboo "Should" · 2021-01-23T01:17:41.128Z · LW · GW

My reaction to this post is something like "yes, this is a good practice", but I've done it, and it pushed me out the other side: I now believe I can't say anything at all without saying some kind of "should", because if you taboo "should" enough you run into grounding problems.

Cf. no free lunch in value learning

This is only to add nuance to your point, though, and I think the practice is worthwhile, because until you've done it you likely have a lot of big "should"s in your beliefs gumming up the works. I just think it's worth pointing out that the category of motivated reasons can't be made to disappear from thought without giving up thinking altogether, even if such reasons can be examined and significantly reduced, and even if the right starting advice is just to try to remove them all.

Comment by gworley on The Problem of the Criterion · 2021-01-22T23:49:00.562Z · LW · GW

It does not seem to me as if there remains anything left to say.

You're right, in the end it's best to say nothing, for then there are no problems. Alas, we suffer and strive and want things to be other than they already are, and so we say something and get ourselves into a world of trouble.

But we needn’t go that far; Eliezer has already addressed the question of “recursive justification”.

I'm not sure what your argument is here? Eliezer already wrote a post on this so no one should bother writing anything about the same topic ever again?

HALT! Proceed no further. Before even attempting to answer this question, ask: why the heck do you care? Why do you want to know the answer to this strange question, “what is a sandwich”? What do you plan to do with the answer?

In the absence of any purpose to that initial question, the rest of that entire section of the post is unmotivated. The sandwich alignment chart? Pointless and meaningless. Attempting to precisely define a “sandwich-like object”? Total waste of time. And so on.

On the other hand, if you do have a purpose in mind, then the right answer to “what is a sandwich” depends on that purpose. And the way you would judge whether you got the right answer or not, is by whether, having acquired said answer, you were then able to use the answer to accomplish that purpose.

As best I can tell you are making an argument here for pragmatism but want to skip talking about epistemology. I won't begrudge your right not to care about epistemology, but the point of this post was to explore a conundrum of epistemology, so I'm not quite sure what you're trying to say here other than that you agree with pragmatism about the problem of the criterion but don't want to talk about epistemology.

(As a reminder, my policy of only replying to you once in a thread on my posts remains in place, although I remain open to reversing that decision.)

Comment by gworley on The Problem of the Criterion · 2021-01-22T23:30:25.235Z · LW · GW

This comment describes a response that sounds exactly like pragmatism to me, so I'm not sure what the distinction you're trying to make here is.

Also, as Matt already pointed out, you must have some criterion by which you criticize your beliefs, or else you literally could not make any distinction whatsoever; the problem then just becomes one of how to ground that criterion, perhaps by accepting it on faith.

Trying to anticipate where the confusion between us is, it might help to say that taking something on faith need not mean it remains fixed forever. You can make some initial assumption to get started and then change your mind about it later (that's fundamental to coherentist approaches).

Comment by gworley on The Problem of the Criterion · 2021-01-22T14:34:43.006Z · LW · GW

See Matt's comment, but CR and PCR sound like coherentist or methodist responses.

Comment by gworley on The Problem of the Criterion · 2021-01-22T14:32:11.859Z · LW · GW

Right! This is the slippery part about the problem of the criterion: literally any way you try to address it requires knowing something, specifically knowing the way you try to address it. It's in this way that nearly every response to it could be argued to be a special case of particularism, since if nothing else you are claiming to know something about how to respond to the problem itself!

Comment by gworley on The Teleological Mechanism · 2021-01-22T04:04:46.740Z · LW · GW

Is the argument you're making that there are localized areas of space that possess lower levels of entropy, and that living systems can be generalized as simply lowering the entropy in an area of space?

Yes, that's part of it. As to what is "local", that's a question of where we draw boundaries, and in fact this gets at an issue fundamental to all of this, which is that we have to have some way to even draw these boundaries. That's a free variable that has to be fixed, ironically, by telos.
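
To illustrate the boundary-drawing point with a toy sketch (my own, not anything from the post or the literature): how much entropy you ascribe to "the system" depends entirely on which variables you count as inside its boundary. Here the inside of the boundary looks perfectly ordered while the disorder has merely been pushed outside:

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Shannon entropy (in bits) of the empirical distribution of samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Microstates of a toy world as (inside, outside) pairs. The "inside"
# region is perfectly ordered; the disorder lives in the "outside" region.
world = [("a", x) for x in "abcdefgh"]
inside = [w[0] for w in world]
outside = [w[1] for w in world]

print(shannon_entropy(inside))   # 0.0 bits: looks like entropy reduction
print(shannon_entropy(outside))  # 3.0 bits: the disorder went somewhere
print(shannon_entropy(world))    # 3.0 bits: the whole shows no reduction
```

So whether something counts as "lowering entropy locally" is relative to a choice of boundary that the physics alone doesn't hand us, which is the free variable mentioned above.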

Comment by gworley on The Problem of the Criterion · 2021-01-22T01:56:57.296Z · LW · GW

Reading this, my guess is that you underestimate the importance of being pragmatic to a single purpose.

When you are pragmatic, you are pragmatic with respect to some purpose. Let's suppose that purpose is truth. You'll get by quite fine in the world if you do that.

Yet there are some things you'll miss out on. For example, suppose you want to know what it is like to be intentionally deceitful, perhaps because you are dealing with a person who lies and would like to understand what it is like to be them. To fully do that you have to think the same sort of thoughts a deceitful person would, and that requires knowing, even if only temporarily, something not maximally predictive of the world to the best of your ability. Thus you must be able to think with the purpose of deceiving others about something in order to embody such thought long enough to get some first-hand experience with thinking that way.

I think this generalizes to deeply understanding what it is like to be someone who isn't you. In fact, I'll go out on a limb and say the typical mind fallacy exists because we are especially bad at considering the problem of the criterion and thinking thoughts as if we were serving someone else's purpose rather than our own.

Comment by gworley on Meditations on faith · 2021-01-22T01:48:00.279Z · LW · GW

Alas, I can't cram a full understanding of the world into just a few short sentences, and the moment I say anything at all I necessarily say something wrong.

Yes, what I present above emphasizes a particular aspect of our existence. As you say, how I feel about things and how I want things to be are real processes in the universe that have downstream effects. It's also true I can't make a cake appear before me by closing my eyes and wishing really hard that I had a cake.

Comment by gworley on The Teleological Mechanism · 2021-01-22T01:39:41.288Z · LW · GW

This is asking for quite a lot of answer, but also mostly seems beside the point of this post.

If the question is just about the terms considered, "active" would seem to clearly be about whether the energy that produces the system's behavior is fed to it from outside vs. generated by the system itself. "Purpose", as I propose, is about thermodynamics and whether entropy is increasing or decreasing. "Teleology" is about feedback, which I wrote about here.

But I'm not sure what your real objection is here since nothing I wrote contradicts the idea that the world is just made up of stuff and nothing is possessed of any essential nature. Yet there is in fact some point at which the world takes on enough local structure that we place it in some category, otherwise we wouldn't be able to use words at all.

So I'm not sure how to read your objection other than you don't like the word "purpose"?

Comment by gworley on Meditations on faith · 2021-01-21T14:27:47.930Z · LW · GW

Of course they do, but those effects don't happen by simply believing they will happen. Believing something doesn't change the world; acting does.

Comment by gworley on The Teleological Mechanism · 2021-01-21T14:26:09.087Z · LW · GW

Well, the categories of this post suggest one way we might do it that's more satisfying than the naive way we draw the boundaries of "life".

We could equate life with active systems.

We could equate life with active, purposeful systems.

We could equate life with active, purposeful, teleological systems.

The test would then be to see which one is most useful to us if we use that to mean "life". Are we happy with the kinds of things that end up in the category? Does it seem natural? Or are these categories cutting at something orthogonal to what we mean by "life", such that we would actually prefer to define it some other way?

I suspect the answer is that by "life" we mean something orthogonal to this classification system such that things we consider alive cut across the boundaries it draws.

Comment by gworley on A highly abusive LW moderator endangered dozens of people with COVID dishonesty · 2021-01-20T21:47:42.386Z · LW · GW

Please stop idolizing people

On this point I can at least agree.

Comment by gworley on Literature Review on Goal-Directedness · 2021-01-19T23:26:22.913Z · LW · GW

Okay, so here's a more adequate follow-up.

This seminal cybernetics essay lays out a way of thinking about this.

First, they consider systems that have observable behavior, i.e. systems that take inputs and produce outputs. Such systems can be either active, in that the system itself is the source of energy that produces the outputs, or passive, in that some outside source supplies the energy to power the mechanism. Compare an active plant or animal to something passive like a rock, though obviously whether or not something is active or passive depends a lot on where you draw the boundaries of its inside vs. its outside.

Active behavior is subdivided into two classes: purposeful and purposeless. They say that purposeful behavior is that which can be interpreted as directed at attaining a goal; purposeless behavior cannot. They spend some time in the paper defending the idea of purposefulness, and I think it doesn't go well. So I'd instead propose we think of these terms differently: I prefer to think of purposeful behavior as that which creates a reduction in entropy within the system and its outputs, while purposeless behavior does not.

They then go on to divide purposeful behavior into teleological and non-teleological behavior, by which they simply mean behavior that's the result of feedback (and they specify negative feedback) or not. In LessWrong terms, I'd say this is like the difference between optimizers ("fitness maximizers") and adaptation executors.

They then go on to make a few additional distinctions that are not relevant to the present topic, although they do have some relevance to AI alignment as regards the predictability of systems.

I'd say then that systems with active, purposeful, teleological behavior are the ones that "care", and the teleological mechanism is the aspect of the system by which it is made to care.
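
To make the feedback distinction concrete, here's a minimal sketch (my own toy example, not from the essay) contrasting a teleological system, whose next state depends on the error between its current state and a goal, with a non-teleological one that just executes a fixed feedforward schedule:

```python
def teleological_step(temp, goal=20.0, gain=0.5):
    """Negative feedback: the next state depends on the error between
    the current state and the goal, so behavior is directed at the goal."""
    error = goal - temp
    return temp + gain * error

def feedforward_step(temp, t, schedule=(1.0, 1.0, -2.0)):
    """No feedback: the next state is a fixed function of time,
    blind to any goal state."""
    return temp + schedule[t % len(schedule)]

temp_fb = temp_ff = 10.0
for t in range(10):
    temp_fb = teleological_step(temp_fb)
    temp_ff = feedforward_step(temp_ff, t)

print(round(temp_fb, 2))  # ~19.99: converges on the goal from any start
print(round(temp_ff, 2))  # 11.0: wanders per the schedule; nothing corrects it
```

The observable difference is what licenses talk of "caring": perturb the first system and it still converges back toward 20, while the second carries on blindly.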

Comment by gworley on Literature Review on Goal-Directedness · 2021-01-19T17:23:57.399Z · LW · GW

Doing a little digging, I realized that the idea of "teleological mechanism" from cybernetics is probably a better handle for the idea and will provide a more accessible presentation of the idea. Some decent references:

https://www.jstor.org/stable/184878

https://www.jstor.org/stable/2103479

https://nyaspubs.onlinelibrary.wiley.com/toc/17496632/50/4

I don't know of anywhere that presents the idea quite how I think of it, though. If you read Dreyfus on Heidegger you might manage to pick this out. Similarly, I think this idea underlies Sartre's talk about freedom, but I can't recall that he explicitly makes the connection in the way I would. To the best of my knowledge philosophers have unfortunately not said enough about this topic: it's omnipresent in humans, so it often comes up incidentally while considering other things but is not deeply explored for its own sake except when people are confused (cf. Hegel on teleology).

Comment by gworley on Literature Review on Goal-Directedness · 2021-01-19T03:14:50.729Z · LW · GW

Reading this, I'm realizing again something I may have realized before and forgotten, but I think ideas about goal-directedness in AI have a lot of overlap with the philosophical topic of telos and Heideggerian care/concern.

The way I think about this is that ontological beings (that is, any process we can identify as producing information) have some ability to optimize (because information is produced by feedback) and must optimize for something rather than nothing (or else they are not optimizers) or everything (in which case they would not be finite, which they must be in our world). Thus we should expect anything we might think of as optimizing to have something it cares about, where "caring about" here is meant not in the self-reflective way humans may knowingly care for something but in the implicit way that acts demonstrate care for something.

That something might not be very crisp: it may be hard to specify or even incoherent (or at least, incoherent when not conditioned on the entire state of the world), hence we might not be able to line it up perfectly with a notion like a utility function, although we could say a utility function is an attempt to represent the concern of an optimization process.

That optimization processes must care about something is similar to the way that the intentionality of thought/belief means that thoughts must be about something (I think this underlies some of the discussion around Dennett's position, though it's not discussed here).

Comment by gworley on Meditations on faith · 2021-01-19T01:59:41.046Z · LW · GW

I had a great deal of difficulty understanding faith, trust, and surrender as you talk about them here for a long time. My trust was always conditional, and it seemed obvious to me that it should be that way. It wasn't until after a lot of meditation I realized there was a kind of unconditional trust I could have: trust in the world to be just as it is. That is, no matter what I believe about the world, it will always be just the way it is regardless of how I feel about it or how I want it to be.

I feel like I keep having this same insight over and over again, deepening my understanding of it each time. I wrote a bit about my current understanding of it a while ago, but I expect I'll develop a yet deeper, more subtle appreciation for this type of trust in the world to be just as it is in the future.

Comment by gworley on Neural Annealing: Toward a Neural Theory of Everything (crosspost) · 2021-01-15T21:52:29.664Z · LW · GW

On the other hand, this post for me exemplifies something I think LessWrong is really good at, which is creating a place where bleeding-edge research that is not obviously off the rails can find an audience. Something like the kind of stuff you would otherwise only hear about because you work at a university and happen to attend a talk a researcher gave internally to solicit feedback.

Like with many posts the audience may be small for this one, but this is the same problem with many AI alignment posts and I don't think we should hold it against this post in voting unless we plan to also vote against inclusion of most technical AI posts that were nominated.

Comment by gworley on What’s good about haikus? · 2021-01-13T20:32:36.219Z · LW · GW

A well-loved haiku from Basho:

In Kyoto,

hearing the cuckoo,

I long for Kyoto

Comment by gworley on G Gordon Worley III's Shortform · 2021-01-11T22:07:55.029Z · LW · GW

ADHD Expansionism

I'm not sure I fully endorse this idea, hence short form, but it's rattling around inside my head and maybe we can talk about it?

I feel like there's a kind of ADHD (or ADD) expansionism happening, where people are identifying all kinds of things as symptoms of ADHD, especially subclinical ADHD.

On the one hand this seems good, in the sense that this kind of expansionism seems to actually be helping people by giving them permission to be the way they are via a diagnosis, and giving them strategies they can try to live their lives better.

On the other hand, I feel like it's terrible in terms of actually diagnosing ADHD. It might help to explain why I think that.

Much of what I see that I'm terming ADHD expansionism looks to me like taking normal human behavior that is ill-fitted to the modern environment and then pathologizing it. As best I can tell, it's normal and adaptive for humans to exhibit various behaviors that get labeled as ADHD symptoms, like flitting between multiple activities, hyperfocus on things the mind finds important but doesn't necessarily endorse as important (S1-important things, not S2-important), understimulation, overstimulation, and otherwise finding it hard to focus on one thing.

All of that sounds like normal, adaptive forager behavior to me. Some of it became maladaptive during the farming era, though not especially so, and now in the industrial era these behaviors are even less adaptive.

Thus I think ADHD suffers from the same issue as codependency: if you start to describe the symptoms you quickly realize 90% of humanity has this "problem". I think we're doing ourselves a disservice by considering it a pathology, because doing so fails to acknowledge that most of these mental habits are just what it's like to be a normal human, and that it's our conditions that are unusual and that we are struggling to function within.

I don't see this as cause to throw out modern industrial society, but rather that we need to think about ways to adapt our systems to better accommodate real humans rather than the idealized ones of high modernism.

On the ground level, yes, we may still need to do much to personally intervene against ADHD-like symptoms, just as we may need to do against our natural tendency towards codependency, but I think something is lost by even talking about it this way. Rather, we need to think of it as a question of how we cope with being humans engaged in systems that ask us to behave in unusual ways, and see the systems as the broken things, not ourselves. It's not that everyone has ADHD or codependency; rather, it's that our systems pathologize normal behavior because they are confused about what is typical.

Comment by gworley on Fittingness: Rational success in concept formation · 2021-01-11T21:42:33.657Z · LW · GW

Oh, but then why have a special word for success at the purpose of rational inquiry? To my ear "fitness" seems like something general we could say about anything, as in its "fitness for X", like "fitness for rational inquiry" or "fitness for convincing others".

Comment by gworley on Fittingness: Rational success in concept formation · 2021-01-11T04:29:26.866Z · LW · GW

I agree, but to me this idea is already captured by the concept we variously call telos, purpose, care, or concern. That is, fitness as you described it seems to be a natural consequence of truth necessarily being in the service of some concern, thus making truth ready at hand. In this way fitness is something like the extent to which a category serves the purpose its discerner intended.

Comment by gworley on Science in a High-Dimensional World · 2021-01-10T18:21:14.944Z · LW · GW

This post gave me an idea about how you might approach magic in fiction while keeping it grounded in reality: something like magic users are people who learn to pick out relevant variables from the noise to consistently nudge reality in ways that otherwise seem impossible.

Basically placebomancy from Unsong.

Comment by gworley on Condition-directedness · 2021-01-08T19:14:50.563Z · LW · GW

I generally think in terms of a planning horizon that doesn't extend out much past 18 months. Typically things change fast enough that planning that far out towards anything specific is not worthwhile, besides a few exceptions, and even the exceptions I can think of involve short-term actions that can't be well planned far in advance.

Generally I'm just thinking about how to get through the next minute. There's some day-level and week-level planning along the lines of "oh, the weather will be nice this day and bad that day" and "I have more meetings this day than that day", and it's often necessary to make plans a few months in advance for things like vacations and conferences and to think vaguely about what I might like to be working on next, but often there's just not enough information to do more than try to point myself in vaguely the right direction.

I think the biggest driver of this for me is just being able to adjust to things that are uncertain now, including unknown uncertainty. Because I don't actually know what things will be like in a minute, a day, a week, a month, or a year from now, and my ability to predict what it will be like decreases in accuracy as time extends, I find it's mostly a bad strategy to try too hard to plan or set specific goals.

So instead I mostly have to make decisions based on what I might call virtue, i.e. does this seem like the kind of thing I would like to do and have done? Will I still feel that way about it later? Will it push me vaguely in the direction of things I would like, or will it work against that?

I'm not sure the world is certain enough to do much more.

Comment by gworley on G Gordon Worley III's Shortform · 2021-01-08T04:24:11.985Z · LW · GW

You're always doing your best

I like to say "you're always doing your best", especially as kind words to folks when they are feeling regret.

What do I mean by that, though? Certainly you can look back at what you did in any given situation and imagine having done something that would have had a better outcome.

What I mean is that, given all the conditions under which you take any action, you always did the best you could. After all, if you could have done something better given all the conditions, you would have.

The key is that all the conditions include the entire history of the world up to the present moment, and so that necessarily includes your life history, the life history of others, the physical environment, your emotional state, how tired you were, how your brain works, etc. The trick is that when you condition your actions so fully there's no room left for any counterfactuals, for you could have done nothing else!

As you might guess, I'm proposing a deterministic outlook on the world. I won't really argue for that much here, other than to say that if you look long and hard enough at free will, it dissolves into an after-the-fact illusion contingent on how your brain compresses reality and models yourself. This is robust to quantum effects: even if quantum effects produce random outcomes, you nonetheless only ever find yourself in a single history where some particular thing happened, regardless of how it happened.

The immediate corollary of all this is that you also are always doing your worst, only that doesn't land too well when someone feels regret.

I like this insight because, taken fully, it dissolves regret. Not that you can't imagine having done better, propose things you might do differently in the future, and then try them to see what happens and maybe actually do better than you previously did. Rather, it dissolves regret because regret hinges on feeling as if a counterfactual could have really happened. Once you deeply believe that counterfactuals are not real, i.e. they are purely of the map and have no existence in the territory independent of the map, regret just has no way to come into existence.

This doesn't mean you can't still feel related emotions like remorse, especially if you realize you were negligent and had a responsibility to have done better but didn't, but that's different than clinging to a desire to have done something different; remorse is owning that you did something less than what you were capable of under the circumstances and might reasonably be asked to make amends.

So next time you feel regret, try reminding yourself it couldn't have gone any other way.

Comment by gworley on You are Dissociating (probably) · 2021-01-07T20:02:59.445Z · LW · GW

Ah, I couldn't quite remember how I've seen the model described before. So rather than what I presented, I've seen it described as "it -> I -> me": the development of the place that emotional responses come from, and how this impacts identity formation.

Comment by gworley on Cryonics signup guide #1: Overview · 2021-01-07T00:35:05.052Z · LW · GW

Oh wow, interesting. Yeah, I get a guaranteed 3% return, so clearly they must be earning less than that now. I'm guessing a product like mine can only be backed by certain classes of investments, which are not yielding enough return to sustain it right now.

Comment by gworley on Centrally planned war · 2021-01-06T19:55:48.165Z · LW · GW

The general rule is something like: militaries fight to win the previous war.

However, technological and social changes cause tactics to change, and this changes how much central planning matters and how much more vs. less central planning is a help or hindrance.

So to me the interesting question is not "why was WWII centrally planned?" (answer: people figured, based on the Great War, that's what would help them win) but "is central planning being used efficiently to maximize outcomes across a complex multidimensional space?"

Sometimes the answer is going to be that, as perhaps in the case of WWII, less central planning would have resulted in more deaths, and so the deaths due to central planning were an acceptable tradeoff.

Comment by gworley on Cryonics signup guide #1: Overview · 2021-01-06T19:15:48.628Z · LW · GW

Interesting.

Also for what it's worth my policy officially describes itself as: "Flexible Premium Adjustable Death Benefit Life Policy, Nonparticipating". This is from KCL, and apparently this is just insurance legalese for GUL, although maybe IUL would be described the same way and the difference is in the details of how the interest rates are calculated?

I seem to recall we went with this because it has the nice feature, like an IUL does, of being able to pay its own premiums if it performs well enough, and for the ability to take out low-interest loans against it, although I think that's a dangerous idea for insurance meant to fund cryonics.

Comment by gworley on Cryonics signup guide #1: Overview · 2021-01-06T02:12:27.963Z · LW · GW

I'm curious why you chose an indexed universal life plan rather than a guaranteed universal life plan (and I guess rather than a variable universal life plan)?

I can't find it now but I remember being warned off IUL and VUL when I signed up because they can have some weird behaviors that can cause the insurance to have a negative payout if you aren't careful, whereas GUL is simple and safe (but expensive).

Comment by gworley on On Meaning and Machines · 2021-01-05T23:22:09.961Z · LW · GW

FWIW I would have liked this post more if it was easier to skim. The big walls of text made it hard for me to figure out if it was worth reading or not and harder to tell if I missed something important because I skimmed. Or maybe it would just help if it explicitly contained something like an abstract or a tl;dr.

Comment by gworley on Gunnar_Zarncke's Shortform · 2021-01-05T21:34:39.956Z · LW · GW

I read LW via /allPosts and they show up there for me. Not sure if that's the default or not since you can configure the feed, which I'm sure I've done some of but I can't remember what.

Comment by gworley on G Gordon Worley III's Shortform · 2021-01-05T20:26:32.528Z · LW · GW

This post suggests a feature idea for LessWrong to me:

https://www.lesswrong.com/posts/6Nuw7mLc6DjRY4mwa/the-national-defense-authorization-act-contains-ai

It would be pretty cool if, instead of a lot of comments ordered by votes or time of posting, it were possible to write a post with parts that could be commented on directly. So, for example, the comments for a particular section could live straight in that section rather than down at the bottom. Could be an interesting way to deal with lots of comments on large, structured posts.

Comment by gworley on You are Dissociating (probably) · 2021-01-05T20:14:29.475Z · LW · GW

I'm confused about the relationship between dissociation and defusion. On the surface they sound like the same thing: getting a little distance from something; separating your sense of self from your feelings; etc. First-hand descriptions of dissociation and first-hand descriptions of some benefits of meditation have many similarities, with the exception that dissociation is described in negative terms.

Let me try to tease these apart.

Alternatively to how I presented it above, I might describe dissociation as not identifying the self perceived as object (the "me") with the self perceived as subject (the "I"), resulting in a feeling of "I am not me".

In this framing, fusion would be something like only perceiving the self as subject and not as object (i.e. "I just am this way"), and defusion would be noticing that there is also a self that can be perceived as object.

Defusion doesn't necessarily imply dissociation or its opposite (I don't think there's a standard term so I'll call this "unification"), but at least gets you to a place where dissociation is possible.

Overall I see this as fitting together into a bigger, developmental picture, which progresses something like this:

  1. I am (fusion)
  2. I am not me (defusion into dissocation)
  3. I am me (defused dissociation transformed into defused unification)

Personally, I think I've experienced mild dissociative states, but I've never felt really negative about it; they seem interesting, and sometimes helpful for dealing with stress.

That seems right. I think lots of people experience subclinical dissociation and it doesn't register as clearly negative, just weird, interesting, etc. Regarding stress, this matches the clinical literature, where more serious dissociation can be triggered by stressful events; it's often viewed as a coping mechanism for dealing with more stress than the mind-body knows what to do with (typical examples include abuse victims). On a subclinical level, it's a way to get some space from a situation a person isn't prepared to handle at the moment.

Is meditation really defusion practice, as Kaj suggests?

I think so, but see my model above: defusion initially leads to defused dissociation, which is not the end state but is still better than the fused state where one can't even consider oneself as object.

Is defusion as beneficial as Kaj suggests?

I think so, but there's some argument to be made that everyone would be better off in terms of life satisfaction if we just weren't self-aware at all (e.g. we'd all be a lot happier if we were rocks), which points in the opposite direction. Given that most people don't want to go in that direction, defusion helps with the suffering of fusion, but that's not the end of the story.

Is dissociation really as negative as people seem to think?

Are defusion and dissociation really the same thing? Or, what exactly are the differences and similarities?

I think I already addressed these two questions in the first part of my reply, but let me know if you still have questions.

Comment by gworley on Gunnar_Zarncke's Shortform · 2021-01-05T02:10:15.964Z · LW · GW

I think there's something here, but it's usually thought of the other way around, i.e. solving AI alignment implies solving human alignment, but the opposite is not necessarily true because humans are less general intelligences than AI.

Also, consider that your example of Taylorism is a case study in an alignment mechanism failing: it tried to align the org but instead spawned a subagent (the union) that caused the org to do things management might have considered worse than simply forgoing the potential gains of Taylorism.

Anyway, this is a topic that's come up a few times on LessWrong; I don't have links handy, but you should be able to find them via search.

Comment by gworley on Mistakes to want · 2021-01-01T19:20:29.297Z · LW · GW

I'd describe some of the additional distinction you are trying to make around innocence as a difference between neglect and true blamelessness.

Comment by gworley on Do You Want the Complexity in the Tools or in Their Usage? · 2020-12-31T17:48:18.677Z · LW · GW

I generally agree with the point that there's a question of where the complexity lies and how much of it is in the tool vs. in the interactions with the tool.

The hand tool example fell short for me, though. Thinking about my experience with hand tools, both powered and non-powered, I find power tools more complex to work with than non-powered ones. For example, I have to know a lot more about how a power drill works to use it safely and effectively than a traditional hand drill. A power drill represents a general increase in complexity for both the tool and the user, not a tradeoff. Of course, we like power drills because they give us more power: we can apply more force faster with less physical effort, so they act as force multipliers and are worth the added complexity.

Same applies to tools like stand mixers in the kitchen vs. spoons.

I think the clearest examples of the principle you're addressing show up in software, where the tradeoff is often explicitly made about where complexity will live. A classic example might be the choice between building software that "just does it for me" vs. giving the user a bunch of simpler programs they can use to do the same thing if they combine them in the right ways.

Comment by gworley on The map and territory of NFT art · 2020-12-31T02:29:47.231Z · LW · GW

To be fair, though, when darkroom work was necessary to develop photos, there actually was an original print generated from the negative that might have desirable features that would lead to it being copied rather than the negative, though even then there was still that original negative to go after as the true "original".

Comment by gworley on mike_hawke's Shortform · 2020-12-31T02:12:12.155Z · LW · GW

This is also my experience of Twitter. Strangely it's not how I experience Facebook, which I like; I find I feel better after checking in on my friends. But I know plenty of people who seem to have the same reaction to Facebook that you and I have to Twitter.

Comment by gworley on Neural Annealing: Toward a Neural Theory of Everything (crosspost) · 2020-12-30T23:48:02.616Z · LW · GW

I found this post important to developing what's proved to be a useful model for me: thinking of neural annealing as a metaphor for how the brain operates in a variety of situations. In particular, I think it makes a lot of sense when thinking about what it is that meditation and psychedelics do to the brain, and consequently it helps me think about how to use them as part of Zen practice.

One thing I like about this post is that it makes claims that should be verifiable via brain studies: we should see things like brain wave patterns that correspond with annealing activity, and we could then follow up with people to see if they are changed after annealing. There's plenty of anecdotal evidence that this is true, but it points towards being able to construct an RCT to test the theory.
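
For readers unfamiliar with the source of the metaphor, here's a minimal simulated-annealing sketch (a generic textbook illustration, not anything from the post itself): high "temperature" lets the system accept worse states so it can escape local minima, and cooling gradually locks in whatever configuration it settles into.

```python
import math
import random

def anneal(energy, neighbor, x0, t0=5.0, cooling=0.95, steps=400):
    """Generic simulated annealing over a 1-D state."""
    x, t = x0, t0
    for _ in range(steps):
        candidate = neighbor(x)
        delta = energy(candidate) - energy(x)
        # Always accept improvements; accept worse states with probability
        # exp(-delta / t), which shrinks as the temperature drops.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        t *= cooling
    return x

# Toy energy landscape: local minimum near x = 2, global minimum near x = -2.3.
energy = lambda x: 0.1 * x**4 - x**2 + 0.5 * x
neighbor = lambda x: x + random.uniform(-0.5, 0.5)

print(anneal(energy, neighbor, x0=2.0))  # usually ends near the global minimum
```

As I understand the post's analogy, meditation and psychedelics raise the brain's "temperature", letting entrenched patterns reorganize before the system cools back down.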

Comment by gworley on Stock market hints for 2021 from past crashes · 2020-12-30T20:28:01.907Z · LW · GW

Any thoughts on the possibility we're in a bubble fueled by money with nowhere to go? Fundamentals are not strong in a lot of sectors, but there's more money to invest than can be put to work, so the theory goes that this has created an artificial recovery that will collapse again once there are more useful places for money to go. This would probably not result in a dramatic crash but in a slow decline over a month or two as capital gets reallocated more reasonably and pulled out of overvalued instruments.

Comment by gworley on Morality as "Coordination", vs "Do-Gooding" · 2020-12-29T19:24:03.154Z · LW · GW

Hmm, you cross-posted to EA forum, so I guess I'll reply both places since each might be seen by different folks.

I think this is often non-explicit in most discussions of morality/ethics/what-people-should-do. It seems common for people to conflate "actions that are bad because it ruins ability to coordinate" and "actions that are bad because empathy and/or principles tell me they are."

I think it's worth challenging the idea that this conflation is actually an issue with ethics.

Although it's true that things like coordination mechanisms and compassion are not literally the same thing, and can have expressions that try to isolate themselves from each other (cf. market economies and prayer), so that things that are bad because they break coordination mechanisms and things that are bad because they don't express compassion are not bad for exactly the same reasons, this need not mean there is nothing deeper going on that ties them together.

I think this is why philosophers of ethics tend to focus on meta-ethics rather than directly trying to figure out what people should do, even when setting meta-ethical uncertainty aside. There's some notion of badness or undesirableness (and conversely goodness or desirableness) that powers both of these, and so they are both different expressions of this same underlying phenomenon. So we can reasonably tie these two approaches together by looking at this question of what makes something seem good or bad to us, and simply treat these as different domains over which we consider how to make good or bad things happen.

As to what good and bad mean, well, that's a larger discussion. My best theory is that in humans it's rooted in prediction error plus some evolved affinities, but this is an ongoing area of inquiry, with folks still trying to figure out what good and bad mean beyond our intuitive sense that something is good or bad.

Comment by gworley on Dissolving the Problem of Induction · 2020-12-28T18:24:40.364Z · LW · GW

I feel like there's something slippery happening when you claim

Deutsch dissolved the Problem of Induction by pointing out that induction doesn't actually play a role in science. Science is a reverse-engineering exercise that doesn't rely on the assumption that "the future will be similar to the past".

When we understand the business of science as reverse-engineering a compressed model of the universe, I don't think its justification relies on a "loop through the meta level". Although, admittedly, it does rely on Occam's Razor.

I think at best you can say Deutsch dissolves the problem for the project of science, but this is not the same thing as dissolving the problem of induction, which is generally considered impossible to solve because it arises from the problem of the criterion, i.e. how can you know the criterion by which you know something is true if you don't first know some true thing? And although reducing the problem of induction to the problem of justifying Occam's razor is helpful, it just pushes the problem around, because at some point you will have reduced things as far as you can and will still face some question of the form "but how do I really know this?". After all, I might ask about the proposed justification of Occam's razor something like "why probabilities?", and you had better hope the answer is not some version of "because they are simpler than the alternatives".

This is not to say we can't get on with projects like science, only that there's an epistemological gap we have to cover over, as you note. The general solution to this is called "pragmatism", and the specific solution in epistemology to this particular problem of justifying anything is called "particularism", because you pick some particular statement(s) to claim as true and go forth on their unjustified assumption.

If that's not satisfying, epistemological nihilism is also an option if you don't want to have to take a leap of faith to make some unjustified assumptions (i.e. propose some axioms), but it's not a very useful position if you want to make distinctions about the world because it collapses them.

Comment by gworley on My favorite essays of life advice · 2020-12-27T01:47:14.334Z · LW · GW

I like the sentiment but I rarely go back to read things I already read. Instead I seek out new things that say similar things in different ways.

A great example of this in my life comes from Zen books. Most of them say the same thing (there's a half-joke that there are only three dharma talks a teacher can give), but in different ways. Sometimes the way it's said and where I am connect, so it's proven a good strategy for me to keep hearing similar teachings in new ways.