Posts

Word Spaghetti 2024-10-23T05:39:20.105Z
Can UBI overcome inflation and rent seeking? 2024-08-01T00:13:51.693Z
Finding the Wisdom to Build Safe AI 2024-07-04T19:04:16.089Z
How was Less Online for you? 2024-06-03T17:10:33.766Z
Fundamental Uncertainty: Chapter 8 - When does fundamental uncertainty matter? 2024-04-26T18:10:26.517Z
Dangers of Closed-Loop AI 2024-03-22T23:52:22.010Z
On "Geeks, MOPs, and Sociopaths" 2024-01-19T21:04:48.525Z
A discussion of normative ethics 2024-01-09T23:29:11.467Z
Extrapolating from Five Words 2023-11-15T23:21:30.865Z
Fundamental Uncertainty: Chapter 1 - How can we know what's true? 2023-08-13T18:55:44.861Z
Physics is Ultimately Subjective 2023-07-14T22:19:01.151Z
Optimal Clothing 2023-05-31T01:00:37.541Z
How much do personal biases in risk assessment affect assessment of AI risks? 2023-05-03T06:12:57.001Z
Fundamental Uncertainty: Chapter 7 - Why is truth useful? 2023-04-30T16:48:58.312Z
Industrialization/Computerization Analogies 2023-03-27T16:34:21.659Z
Fundamental Uncertainty: Chapter 6 - How can we be certain about the truth? 2023-03-06T13:52:09.333Z
Feelings are Good, Actually 2023-02-21T02:38:11.793Z
How much is death a limit on knowledge accumulation? 2023-02-14T03:54:16.070Z
Acting Normal is Good, Actually 2023-02-10T23:35:41.043Z
Religion is Good, Actually 2023-02-09T06:34:12.601Z
Drugs are Sometimes Good, Actually 2023-02-08T02:24:24.152Z
Sex is Good, Actually 2023-02-05T06:33:26.027Z
Small Talk is Good, Actually 2023-02-04T00:38:21.935Z
Exercise is Good, Actually 2023-02-02T00:09:18.143Z
Nice Clothes are Good, Actually 2023-01-31T19:22:06.430Z
Amazon closing AmazonSmile to focus its philanthropic giving to programs with greater impact 2023-01-19T01:15:09.693Z
MacArthur BART (Filk) 2023-01-02T22:50:04.248Z
Fundamental Uncertainty: Chapter 5 - How do we know what we know? 2022-12-28T01:28:50.605Z
[Fiction] Unspoken Stone 2022-12-20T05:11:23.231Z
The Categorical Imperative Obscures 2022-12-06T17:48:01.591Z
Contingency is not arbitrary 2022-10-12T04:35:07.407Z
Truth seeking is motivated cognition 2022-10-07T19:19:27.456Z
Quick Book Review: Crucial Conversations 2022-09-19T06:25:23.052Z
Keeping Time in Epoch Seconds 2022-09-10T00:28:08.137Z
Fundamental Uncertainty: Chapter 4 - Why don't we do what we think we should? 2022-08-29T19:25:16.917Z
Fundamental Uncertainty: Chapter 3 - Why don't we agree on what's right? 2022-06-25T17:50:37.565Z
Fundamental Uncertainty: Chapter 2 - Why do words have meaning? 2022-04-18T20:54:24.539Z
Modect Englich Cpelling Reformc 2022-04-16T23:38:50.212Z
Good Heart Donation Lottery Winner 2022-04-08T20:34:41.104Z
How I Got So Much GHT 2022-04-07T03:59:36.538Z
What are rationalists worst at? 2022-04-06T23:00:08.600Z
My Recollection of How This All Got Started 2022-04-06T03:22:48.988Z
You get one story detail 2022-04-05T04:38:36.022Z
Software Engineering: Getting Hired and Promoted 2022-04-04T22:31:52.967Z
My Superpower: OODA Loops 2022-04-04T01:51:46.622Z
How Real Moral Mazes (in Bay Area startups)? 2022-04-03T18:08:54.220Z
Becoming a Staff Engineer 2022-04-03T02:30:12.951Z
Good Heart Donation Lottery 2022-04-01T17:51:24.235Z
[David Chapman] Resisting or embracing meta-rationality 2022-02-27T21:46:23.912Z
Fundamental Uncertainty: Prelude 2022-02-06T02:26:49.707Z

Comments

Comment by Gordon Seidoh Worley (gworley) on What if muscle tension is sometimes signal jamming? · 2024-11-05T03:14:31.323Z · LW · GW

I don't know, but I can say that after a lot of hours of Alexander Technique lessons, my posture and movement improved in ways that could be described as "having less muscle tension," and this reduction in tension happened in conjunction with various sorts of opening up, being more awake, and moving closer to PNSE.

Comment by Gordon Seidoh Worley (gworley) on Death notes - 7 thoughts on death · 2024-10-29T03:25:36.676Z · LW · GW

Thank you for sharing your thoughts, and sorry for your losses. It's often hard to talk about death, especially about the deaths of those we love. I don't really have anything other to say than that I found this moving to read, and I'm glad you shared it with us.

Comment by Gordon Seidoh Worley (gworley) on somebody explain the word "epistemic" to me · 2024-10-28T17:16:29.344Z · LW · GW

Here's more answer than you probably wanted.

First up, the word "epistemic" solves a limitation of the word "knowledge": "knowledge" doesn't easily turn into an adjective. Yes, like all nouns in English it can be used like an adjective in the creation of noun phrases, but "knowledge state" and "knowledge status" don't sound as good.

But more importantly there's a strong etymological reason to prefer the word "epistemic" in these cases. "Epistemic" comes from "episteme", one of Greek's words for knowledge[1]. Episteme is knowledge that is justified by observation and reason, and importantly is known because the knower was personally convinced of the justification, as opposed to gnosis, where the only justification is experience, or doxa, which is second-hand knowledge[2].

Thus "epistemic" carries with it the connotation of being related to justified beliefs. An "epistemic state" or "epistemic status" implies a state or status related to how justified one's beliefs are.

  1. ^

    "Knowledge" is cognate with another Greek word for knowledge, "gnosis", but the two words evolved along different paths from PIE *gno-, meaning "know".

  2. ^

    We call doxa "hearsay" in English, but because of that word's use in legal contexts, it carries some pejorative baggage related to how hearsay is treated in trials. To get around this we often avoid the word "hearsay" and instead focus on our level of trust in the person we learned something from, but won't make a clear distinction between hearsay and personally justified knowledge.

Comment by Gordon Seidoh Worley (gworley) on The hostile telepaths problem · 2024-10-27T20:30:49.232Z · LW · GW

I'm sure my allegiance to these United States was not created just by reciting the Pledge thousands of times. In fact, I resented the Pledge for a lot of my life, especially once I learned more about its history.

But if I'm honest with myself, I do feel something like strong support for the ideals of the United States, much stronger than would make sense if someone had convinced me as an adult that its founding principles were a good idea. The United States isn't just my home. I yearn for it to be great, to embody its values, and to persist, even as I disagree with many of the details of how we're implementing the dream of the founders today.

Why do I think the Pledge mattered? It helped me get the feeling right. Once I had positive feelings about the US, of course I wanted to actually like the US. I latched onto the part of it that resonates with me: the founding principles. Someone else might be attracted to something else, or maybe would even find they don't like the United States, but stay loyal to it because they have to.

I'm also drawing on my experience with other fake-it-until-you-make-it rituals. For example, I and many people really have come to feel more grateful for the things we have in life by explicitly acknowledging that gratitude. At the start it's fake: you're just saying words. But eventually those words start to carry meaning, and before long it's not fake. You find the gratitude that was already inside you and learn how to express it.

In the opening example, I bet something similar could work for getting kids to apologize. No need to check if they are really sorry, just make them say sorry. Eventually the sadness at having caused harm will become real and flow into the expression of it. It's like a kind of reverse training, where you create handles for latent behaviors to crystallize around, and by creating the right conditions when the ritual is performed, you stand a better-than-chance possibility of getting the desired association.

Comment by Gordon Seidoh Worley (gworley) on The hostile telepaths problem · 2024-10-27T18:48:05.584Z · LW · GW

Some cultures used to, and maybe still do, have a solution to the hostile telepaths problem you didn't list: perform rituals even if you don't mean them.

If a child breaks their mom's glasses, the mom doesn't care if they are really sorry or not. All she cares about is if they perform the sorry-I-broke-your-glasses ritual, whatever that looks like. That's all that's required.

The idea is that the meaning comes later. We have some non-central instances of this in Western culture. For example, most US school children recite the Pledge of Allegiance every day (or at least they used to). I can remember not fully understanding what the words meant until I was in middle school, but I just went along with it. And wouldn't you know it, it worked! I do have an allegiance to the United States as a concept.

The world used to be more full of these rituals and strategies for appeasing hostile telepaths, who just chose not to use their telepathy because everyone agreed it didn't matter so long as the rituals were performed. But the spread of Christianity and Islam has brought a demand for internalized control of behaviors to much of the world, and with it we get problems like shame and guilt.

Now I'm not saying that performing rituals even if you don't mean them is a good solution. There are a lot of tradeoffs to consider, and guilt and shame offer some societal benefits that enable higher trust between strangers. But it is an alternative solution, and one that, as my Pledge of Allegiance example suggests, does sometimes work.

Comment by Gordon Seidoh Worley (gworley) on Word Spaghetti · 2024-10-24T17:11:48.029Z · LW · GW

Many ideas are hard to fully express in words. Maybe no idea can be precisely and accurately captured. Something is always left out when we use our words.

What I think makes some people faster (and arguably better) writers is that they natively think in terms of communication with others, whereas I natively think in terms of world modeling, and then try to come up with words that explain the world model. They don't have to go through a complex thought process to figure out how to transmit their world model to others, because they just say things that convey the messages that exist in their heads, and those messages are generated based on their model of the world.

Comment by Gordon Seidoh Worley (gworley) on Word Spaghetti · 2024-10-24T17:04:06.087Z · LW · GW

Yep! In fact, an earlier draft of this post included a mention of Paul Graham, because he's a popular and well-liked example of someone who has a similar process to the one I use (though I don't know if he does it for the same reasons).

In that earlier draft, I contrasted Graham with Scott Alexander, who I vaguely recall mentioning that he basically sits down at his computer and a couple hours later a finished piece of writing has appeared. But I couldn't find a good reference for this being Scott's process, so maybe it's just a thing I talked with him about in person one time.

In the end I decided this was an unnecessary tangent for the body of the text, but I'm very glad to have a chance to talk about it in the comments! Thanks!

Comment by Gordon Seidoh Worley (gworley) on [Intuitive self-models] 6. Awakening / Enlightenment / PNSE · 2024-10-23T04:28:05.252Z · LW · GW

As of late July last year, "I" am in PNSE. A few comments.

First, no major errors or concerns when reading the post. I might have missed something, but nothing triggered the "this is misunderstanding what PNSE is fundamentally like" alarm.

Second, there's a lot of ways PNSE is explained. I like this short version: "I am me". That is, "I", the subject of experience, no longer experiences itself as subject, but rather as object, i.e. "me". It's like having a third-person experience of the self. I also like to describe it as thought becoming a sense, like vision or hearing, because "I" no longer do the thinking; instead this person does the thinking to me.

Third, not everyone describes it this way, but in Zen we call the transition into PNSE the Great Death because it literally feels like dying. It's not dissimilar from the ego death people experience on drugs like LSD, but ego "death" is better described as ego "sleep" because it comes back and, after it's happened once, the mind knows the ego is going to come back, whereas in the Great Death the sense of separate self is gone and not coming back. All that said, many with PNSE don't experience a violent transition like this, so the Great Death or something like it may be a contingent feature of some paths to PNSE and not others.

Fourth, I don't remember if the paper discusses this, and this is controversial among some Buddhist traditions, but PNSE doesn't mean the mind is totally liberated from belief in a separate self. You said the homunculus concept lies dormant, but I'd say it does more than that. The mind is filled with many beliefs that presuppose the existence of the homunculus, and even if the homunculus is no longer part of experiences of the world, it's still baked into habits of behavior, and it takes significant additional work once in PNSE to learn new habits to replace the old ones that have the homunculus baked into them. Very few people ever become free of all of them, and maybe literally no one does as long as they continue to live.

Fifth and finally, PNSE is great, I'm glad it's how I am now. It's also fine not to be in it, because even if you believe you have a homunculus, in an absolute sense you already don't, you're just confused about how the world works, and that's okay, we're all confused. PNSE is also confused, but in different ways, and with fewer layers of confusion. So if you read this post and are now excited to try for PNSE, great, do it, but be careful. Lots of people Goodhart on what they think PNSE is because they try too hard to get it. If PNSE doesn't sneak up on you, then be extra suspect of Goodharting! (Actually, just always be suspicious that you've Goodharted yourself!)

Comment by Gordon Seidoh Worley (gworley) on Information vs Assurance · 2024-10-20T23:48:47.673Z · LW · GW

The information/assurance split feels quite familiar to me as an engineering manager.

My work life revolves around projects, especially big projects that take months to complete. Other parts of the business depend on when these projects will be done. In some cases, the entire company's growth plans may hinge on my team completing a project by a certain time. And so everyone wants as much assurance as possible about when projects will complete.

This makes it really hard to share information, because people are so hungry for assurance they will interpret almost any sharing of information as assurance. A typical conversation I used to have when I was naive to this fact:

Sales manager: Hey, Gordon, when do you think that project will be done?

Me: Oh, if things go according to plan, probably next month.

Sales manager: Cool, thanks for the update!

If the project ships next month, no problem. But as often happens in software engineering, if the project gets delayed, now the sales manager is upset:

Them: Hey, you said it would be ready next month. What gives?

Me: I said if things went according to plan, but there were surprises, so it took us longer than we initially thought it would.

Them: Dammit. I sold a customer on the assumption that the project was shipping this month! What am I supposed to tell them now?

Me: I don't know, why did you do that? I was giving you an internal estimate, not a promise of delivery.

Them: You said this month. I'm tired of Engineering always having some excuse about why stuff is delayed.

What did I do wrong? I failed to understand that Sales, and most other functions in a software business, are so dependent on and hungry for information from Engineering that they saw the assurance they wanted to see rather than the information I was giving.

I've (mostly) learned my lesson. I have to carefully control how much I say to anyone not directly involved in the project, lest they get the wrong idea.

Someone: Hey, Gordon, when do you think that project will be done?

Me: We're working on it. We set a goal of having it complete by end of next quarter.

Do I actually expect it to take all the way to next quarter? No. Most likely it'll be done next month. But if anything unexpected happens, now I've given a promise I can keep.

This isn't exactly just "underpromise, overdeliver". That's part of it, but it's also about noticing when you're accidentally making a promise. Even when you think you're not, even if you say really explicitly that you're not making a promise, someone will interpret it as a promise, and now you'll have to deal with that.

Comment by Gordon Seidoh Worley (gworley) on The Hopium Wars: the AGI Entente Delusion · 2024-10-14T18:30:48.881Z · LW · GW

I defined tool AI specifically as controllable, so AI without a quantitative guarantee that it's controllable (or "safe", as you write) wouldn't meet the safety standards and its release would be prohibited.

If your stated definition is really all you mean by tool AI, then you've defined tool AI in a very nonstandard way that will confuse your readers.

When most people hear "tool AI", I expect them to think of AI like hammers: tools they can use to help them achieve a goal, but aren't agentic and won't do anything on their own they weren't directly asked to do.

You seem to have adopted a definition of "tool AI" that actually means "controllable and goal-achieving AI", but give no consideration to agency, so I can only conclude from your writing that you would mean for AI agents to be included as tools, even if they operated independently, so long as they could be controlled in some sense (what sense control takes exactly you never specify). This is not what I expect most people to expect someone to mean by a "tool".

Again, I like all the reasoning about entente, but this use of the term "tool AI" is confusing, maybe even deceptive (I assume that was not the intent!). It also leaves me feeling like your "solution" of tool AI is nothing other than a rebrand of what we've already been talking about in the field variously as safe, aligned, or controllable AI, which I guess is fine, but "tool AI" is a confusing name for that. This also further downgrades my opinion of the solution section, since as best I can tell it's just saying "build AI safely" without enough details to be actionable.

Comment by Gordon Seidoh Worley (gworley) on The Hopium Wars: the AGI Entente Delusion · 2024-10-13T18:39:05.665Z · LW · GW

What do you make of the extensive arguments that tool AI are not actually safer than other forms of AI, and only look that way on the surface by ignoring issues of instrumental convergence to power-seeking and the capacity for tool AI to do extensive harm even if under human control? (See the Tool AI page for links to many posts tackling this question from different angles.)

(Also, for what it's worth, I was with you until the Tool AI part. I would have liked this better if it had been split between one post arguing what's wrong with entente and one post arguing what to do instead.)

Comment by Gordon Seidoh Worley (gworley) on Values Are Real Like Harry Potter · 2024-10-10T07:19:44.538Z · LW · GW

I agree with the main claim of this post, mostly because I came to the same conclusion several years ago and have yet to have my mind changed away from it in the intervening time. If anything, I'm even more sure that values are after-the-fact reifications that attempt to describe why we behave the way we do.

Comment by Gordon Seidoh Worley (gworley) on [Intuitive self-models] 3. The Homunculus · 2024-10-04T23:01:41.829Z · LW · GW

Anyway, after a bit more effort, I found the better search term, hara, and lots of associated results that do seem to back up Johnstone’s claim (if I’m understanding them right—the descriptions I’ve found feel a bit cryptic). Note, however, that Johnstone was writing 45 years ago, and I have a vague impression that Japanese people below age ≈70 probably conceptualize themselves as being in the head—another victim of the ravages of global cultural homogenization, I suppose. If anyone knows more about this topic, please share in the comments!

I'm not Japanese, but I practice Zen, so I'm very familiar with the hara. I can't speak to what it would be like to have had the belief that my self was located in the hara, but I can talk about its role in Zen.

Zen famously, like all of Buddhism, says that there's no separate self, i.e. the homunculus isn't how our minds work. A common starting practice instruction in Zen is to meditate on the breath at the hara, which is often described as located about 2 inches inside the body from the bellybutton.

This 2 inch number assumes you're fairly thin, and it may not be that helpful a way to find the spot anyway. I instead tell people to find it by feeling for where the very bottom of their diaphragm is: the lowest point in the body that contracts at the start of a breath and relaxes when a breath finishes.

Some Zen teachers say that hara is where attention starts, as part of a broader theory that attention/awareness cycles with the breath. I wrote about this a bit previously in a book review. I don't know if that's literally true, but as a practice instruction it's effective to have people put their attention on the hara and observe their breathing. This attention on the breath at a fixed point can induce a pleasant trance state that often creates jhana, and longer term helps with the nervous system regulation training meditation performs.

It takes most people several hundred to a few thousand hours to be able to really stabilize their attention on the hara during meditation, although the basics of it can be grasped within a few dozen hours.

Comment by Gordon Seidoh Worley (gworley) on Eye contact is effortless when you’re no longer emotionally blocked on it · 2024-10-03T15:37:32.934Z · LW · GW

One practice we have done at times at my Zen center during sesshins is eye gazing practice. In it, you sit across from someone and just look into their eyes silently for several minutes while they do the same. That's it. Simple, but really effective way to feel into the nonseparate, embeddedness of living.

Comment by Gordon Seidoh Worley (gworley) on Information dark matter · 2024-10-02T00:28:23.973Z · LW · GW

This seems like a fine topic, but FYI I ended up giving it a downvote because I gave up reading part way through, started skimming, and ironically most of what's in this post turned into information dark matter because I lost faith that I'd gain more from reading it than skimming. I'd have preferred a more condensed post.

Comment by Gordon Seidoh Worley (gworley) on A Path out of Insufficient Views · 2024-09-25T03:10:09.739Z · LW · GW

There's people who identify more with System 2. And they tend to believe truth is found via System 2 and that this is how problems are solved.

There's people who identify more with System 1. And they tend to believe truth is found via System 1 and that this is how problems are solved.

(And there are various combinations of both.)

 

I've been thinking about roughly this idea lately.

There's people who are better at achieving their goals using S2, and people who are better at achieving their goals using S1, and almost everyone is a mix of these two types of people, but are selectively one of these types of people in certain contexts and for certain goals. Identifying with S2 or S1 then comes from observing which tends to do a better job of getting you what you want, so it starts to feel like that's the one that's in control, and then your sense of self gets bound up with whatever mental experiences correlate with getting what you want.

For me this has shown up as being a person who is mostly better at getting what he wants with S2, but my S2 is unusually slow, so for lots of classes of problems it fails me in the moment even if it knew what to do after the fact. Most of my most important personal developments have come on the back of using S2 long enough to figure out all the details of something so that S1 can take it over. A gloss of this process might be to say I'm using intelligence to generate wisdom.

I get the sense that other people are not in this same position. There's a bunch of people for whom S2 is fast enough that they never face the problem I do, and they can just run S2 fast enough to figure stuff out in real time. And then there's a whole alien-to-me group of folks who are S1 first and think of S2 as this slightly painful part of themselves they can access when forced to, but would really rather not.

Comment by Gordon Seidoh Worley (gworley) on The Other Existential Crisis · 2024-09-22T02:43:06.722Z · LW · GW

What will I do when I grow up, if AI can do everything?

One interesting thing about this question is that it comes from an implicit frame in which humans must do something to support their survival.

This is deeply ingrained in our biology and culture. As animals, we carry in us the well-worn drives to survive and reproduce, without which we would not exist, because our ancestors would never have created the unbroken chain of billions of years that led to us. And with those drives comes the need to do something useful to those ends.

As humans, we are enmeshed in a culture that exists at the frontier of a long process of becoming ever better at working together to get better at surviving, because those cultures that did it better outcompeted those that were worse at it. And so we approach our entire lives with this question in our minds: what actions will I take that contribute to my survival and the survival of my society?

Transformative AI stands to break the survival frame, where the problem of our survival is put into the hands of beings more powerful than ourselves. And so then the question becomes, what do we do if we don't have to do anything to survive?

I imagine quite a lot of things! Consider what it is like to be a pet kept by humans. They have all their survival needs met for them. Some of them are so inexperienced at surviving that they'd probably die if their human caretakers disappeared, and others would make it but without the experience of years of caring for their own survival to make them experts at it. What do they do given they don't have to fight to survive? They live in luxury and happiness, if their caretakers love them and are skillful, or suffering and sorrow, if their caretakers don't or aren't.

So perhaps like a dog who lives to chase a ball or a cat who lives for napping in the sun, we will one day live to tell stories, to play games, or to simply enjoy the pleasures of being alive. Let us hope that's the world we manage to create!

Comment by Gordon Seidoh Worley (gworley) on I finally got ChatGPT to sound like me · 2024-09-17T17:53:37.065Z · LW · GW

Did you have to prompt it in any special ways to get it to do this?

I've tried this same experiment several times in the past because I have decades of writing that must be in the training set, but each time I made no progress because the fine-tuning refused to recognize that I was a person it knew about and could make writing sound like, even though, when prompted differently, it could give me back unique claims that I made in posts.

I've not tried again with the latest models. Maybe they'll do it now?

Comment by Gordon Seidoh Worley (gworley) on Head in the Cloud: Why an Upload of Your Mind is Not You · 2024-09-17T17:47:54.717Z · LW · GW

My high level take is that this essay is confused about what minds are and how computers actually work, and it ends up in weird places because of that. But that's not a very helpful argument to make with the author, so let me respond to two points that the conclusion seems to hinge on.

A mind upload does not encapsulate our brain’s evolving neuroplasticity and cannot be said to be an instantiation of a mind. 

This seems like a failure to imagine what types of emulations we could build to create a mind upload. Why is this not possible, rather than merely something that seems like a hard engineering problem to solve? As best I can tell, your argument is something like "computer programs are fragile and can't self-heal", but this is also true of our bodies and brains for sufficient levels of damage, and most computer programs are fragile by design because they favor efficiency. Robust computer programs where you could delete half of them and they'd still run are entirely possible to create. It's only a question of where resources are spent.

Likewise, it is not enough for a mind upload to behave in human-like ways for us to consider it sentient. It must have a physical, biological body, which it lacks by definition. 

This is nonsense. Uploads are still physically instantiated, just by different means. Your argument thus must hinge on the "biological body" claim, but you don't prove this point. To do so you'd need to provide an argument that there is something special about our bodies that cannot be successfully reproduced in a computer emulation even in theory.

It's quite reasonable to think current computers are not powerful enough to create a sufficiently detailed emulation to upload people today, but that does not itself preclude the development of future computers that are so capable. So you need an argument for why a computer of sufficient power to emulate a human body, including the brain, and an environment for it to live in is not possible at all, or would be impractical even with many orders of magnitude more compute (e.g. some problems can't be solved, even though it's theoretically possible, because they would require more compute than is physically possible to get out of the universe).


For what it's worth, you do hit on an important issue in mind uploading: minds are physically instantiated things that are embedded in the world, and attempts to upload minds that ignore this aren't going to work. The mind is not even just the brain; it's a system that exists in conjunction with the whole body and the world it finds itself in, such that it can't be entirely separated from them. But this is not necessarily a blocker to uploading minds. It's an engineering problem to be solved (or found to be unsolvable for some specific reasons), not a theoretical problem with uploads.

Comment by Gordon Seidoh Worley (gworley) on Forever Leaders · 2024-09-15T20:50:54.025Z · LW · GW

I was slightly tempted to downvote on the grounds that I don't want to see posts like this on LW, but the author is new so instead I'll leave this comment.

What I dislike about this post is that it's making an extremely obvious and long discussed observation. There's nothing wrong with new people having this old insight—in fact, having insights others have already had can be confirmation that your thinking is going in a useful direction—but I'm not excited to read about an idea that people have thought of since before I was born (e.g. Asimov's Foundation series arguably includes exactly this idea of what happens when a leader lives forever, for a slightly unusual definition of "lives").

My guess is that others feel the same, which helps explain this post's lukewarm response.

I'd be more excited to read a post that explored some new angle on the idea.

Comment by Gordon Seidoh Worley (gworley) on Collapsing the Belief/Knowledge Distinction · 2024-09-12T16:22:11.888Z · LW · GW

You don't make clear what distinction between belief and knowledge you are arguing against, so I can't evaluate your claim that there's no distinction between them.

Comment by gworley on [deleted post] 2024-09-08T21:55:26.743Z

I'm curious, why write about Erikson? He's interesting from a historical perspective, but the field of developmental psychology has evolved a lot since then and has better models than Erikson did.

Comment by Gordon Seidoh Worley (gworley) on What Depression Is Like · 2024-08-29T06:23:08.049Z · LW · GW

> it's not meant to be tricky or particularly difficult in any way, just tedious.

Tedium still doesn't land for me as a description of what depression is like. I avoid doing all kinds of tedious things as a non-depressed person because I value my time. For example, I find cooking tedious, so I use money to buy my way out of having to spend a lot of time preparing meals, but I'm not depressed in general or about food specifically.

Perhaps depression makes things feel tedious that otherwise would not because of a lack of motivation to do them. For example, I like sweeping the floor, but sweeping the floor would feel tedious if I didn't get satisfaction from having clean floors. I probably wouldn't like sweeping the floor if I were depressed and didn't care about the floors being clean.

Maybe I'm splitting hairs here, but it seems to me worth making a clear distinction between what it feels like to be depressed and what are common symptoms of depression. The lack of care seems to me like a good approximation of what it feels like; tediousness or puzzle solving seems more like a symptom that shows up for many people, but is not in itself what it is like to be depressed, even if it is a frequent type of experience one has while depressed.

Comment by Gordon Seidoh Worley (gworley) on Why Large Bureaucratic Organizations? · 2024-08-29T06:11:34.279Z · LW · GW

I think there's something to what you say, but your model is woefully incomplete in ways that miss much of why large bureaucratic organizations exist.

  • Most organizations need to scale to a point where they will encounter principal-agent problems.
  • Dominance hierarchies offer a partial solution to principal-agent problems in that dominance can get agents to do what their principals want.
  • Dominance is not bad. Most people want to be at least partially dominated because by giving up some agency they get clear goals to accomplish in exchange, and that accomplishment gives them a sense of meaning.
    • Also they may care about the mission of the org but not know how to achieve its goals without someone telling them what to do.

Basically what I want to say is that dominance is instrumentally useful given human psychology and the goals of many organizations, and I think most organizations don't exist for the purpose of exerting dominance over other people except insofar as is necessary to achieve goals.

Comment by Gordon Seidoh Worley (gworley) on What Depression Is Like · 2024-08-28T22:46:20.886Z · LW · GW

I was depressed for most of my 20s. I can't say it felt anything like having to solve a puzzle to do things. It instead felt like I didn't care, lacked motivation, etc. Things weren't hard to do; I just didn't want to do them, or didn't think doing them would be worthwhile, because I expected bad stuff to happen as a result of doing things instead of good stuff.

Your model also contradicts most models I'm aware of that describe depression, which fit more with my own experience of a lack of motivation or care or drive to do things.

To me it sounds like you're describing something that is comorbid with depression for you. I don't have ADHD, but what you're describing pattern matches to how I hear people with ADHD describe the experience of trying to make themselves do things: like most activities are like a puzzle in that they require lots of S2-type thinking to make them happen.

Comment by Gordon Seidoh Worley (gworley) on How I started believing religion might actually matter for rationality and moral philosophy · 2024-08-24T20:39:47.129Z · LW · GW

Sure. I'll do my best to give some more details. This is all from memory, and it's been a while, so I may end up giving ahistorical answers that mix up the timeline. Apologies in advance for any confusion this causes. If you have more questions or I'm not really getting at what you want to know, please follow up and I'll try again.

First, let me give a little extra context on the status thing. I had also not long before read Impro, which has a big section on status games, and that definitely informed how The e-Myth hit me.

So, there's this way in which managers play high and low. When managers play high they project high confidence. Sometimes this is needed, like when you need to motivate an employee to work on something. Sometimes it's counterproductive, like when you need to learn from an employee. Playing too high status can make it hard for you to listen, and hard for the person you need to hear from to feel listened to and thus be encouraged to tell you what you need to know. Think of the know-it-all manager who can do your job better than you, or the aloof manager uninterested in the details.

Playing low status is often a problem for managers, and not being able to play high is one thing that keeps some people out of management. No one wants to follow a low status leader. A manager doesn't necessarily need to be high status in the wider world, but they at least need to be able to claim higher status than their employees if those employees are going to want to do what they say.

The trouble is, sometimes managers need to play high playing low, like when a manager listens to their employee to understand the problems they are facing in their work, and actually listen rather than immediately dismiss the concerns or round them off to something they've dealt with before. A key technique can be literally lowering oneself, like crouching down to be at eye level of someone sitting at a desk, as this non-verbally makes it clear that the employee is now in the driver seat and the manager is along for the ride.

Effective managers know how to adjust their status when needed. The best are naturals who never had to be taught. Second best are those who figure out the mechanics and can deploy intentional status play changes to get desired outcomes. I'm definitely not in the first camp. To any extent I'm successful as a manager, it's because I'm in the second.

Ineffective managers, by contrast, just don't understand any of this. They typically play high all the time, even at inappropriate times. That will keep a manager employed, but they'll likely be in the bottom quartile of manager quality, and will only succeed in organizations where little understanding and adaptation is needed. The worst is low playing high status (think Michael Scott in The Office). You only stay a manager if you are low playing high due to organizational dysfunction.

Okay, so all that out of the way, the way this worked for me was mostly in figuring out how to play high straight. I grew up with the idea that I was a smart person (because I was in fact more intelligent than lots of people around me, even if I had less experience and made mistakes due to lack of knowledge and wisdom). The archetypal smart person that most closely matched who I seemed to be was the awkward professor type who is a genius but also struggles to function. So I leaned into being that type of person and eschewed feedback I should be different because it wasn't in line with the type of person I was trying to be.

This meant my default status mode was high playing low playing high, by which I mean I saw myself as a high status person who played low, not because he wanted to, but because the world didn't recognize his genius, but who was going to press ahead and precociously aim for high status anyway. Getting into leadership, this kind of worked. Like I had good ideas, and I could convince people to follow them because they'd go "well, I don't like the vibe, but he's smart and been right before so let's try it", but it didn't always work and I found that frustrating.

At the time I didn't really understand what I was doing, though. What I realized, in part, after this particular insight, was that I could just play the status I wanted to straightforwardly. Playing multilayer status games is a defense mechanism, because if any one layer of the status play is challenged, you can fall back one more layer and defend from there. If you play straight, you're immediately up against a challenge to prove you really are what you say you are. So integration looked like peeling back the layers and untangling my behaviors to be more straightforward.

I can't say I totally figured it out from just this one insight. There was more going on that later insights would help me untangle. And I still struggle with it despite having a thorough theory and lots of experience putting it into play. My model of myself is that my brain literally runs slow, in that messages seem to propagate across it less quickly than they do for other people, as suggested by my relatively poor reaction times (+2 sd), and this makes it difficult for me to do high-bandwidth real-time processing of information like is required in social settings like work. All this is to say that I've had to dramatically over-solve almost every problem in my life to achieve normalcy, but I expect most people wouldn't need so much as I have. Make of this what you will when thinking about what this means for me to have integrated insights: I can't rely on S2 thinking to help me in the moment; I have to do things with S1 or not at all (or rather with a significant async time delay).

Comment by Gordon Seidoh Worley (gworley) on Fundamental Uncertainty: Chapter 3 - Why don't we agree on what's right? · 2024-08-24T17:30:13.431Z · LW · GW

Note to self: add in a reference to this book as a good intro to Bayesianism: https://www.lesswrong.com/posts/DcEThyBPZfJvC5tpp/book-review-everything-is-predictable-1

Comment by Gordon Seidoh Worley (gworley) on How I started believing religion might actually matter for rationality and moral philosophy · 2024-08-23T23:56:49.369Z · LW · GW

Sure. This happened several times to me, each of which I interpret as a transition from one developmental level to the next, e.g. Kegan 3 -> 4 -> 5 -> Cook-Greuter 5/6 -> 6. Might help to talk about just one of these transitions.

In the Summer of 2015 I was thinking a lot about philosophy and trying to make sense of the world and kept noticing that, no matter what I did, I'd always run into some kind of hidden assumption that acted as a free variable in my thinking that was not constrained by anything and thus couldn't be justified. I had been going in circles around this for a couple years at this point. I was also, coincidentally, trying to figure out how to manage the work of a growing engineering team and struggling because, to me, other people looked like black boxes that I only kind of understood.

In the midst of this I read The e-Myth on the recommendation of a coworker, and in the middle of it there was this line about how effective managers are neither always high nor low status, but change how they act based on the situation, and combined with a lot of other reading I was doing this caused a lot of things to click into place.

The phenomenology of it was the same as every time I've had one of these big insights. It felt like my mind stopped for several seconds while I hung out in an empty state, and then I came back online with a deeper understanding of the world. In this case, it was something like "I can believe anything I want", in the sense that there really were some unjustified assumptions being made in my thinking, that this was unavoidable, and that it was okay because there was no other choice. All I could do was pick the assumptions to be the ones that would be most likely to make me have a good map of the world.

It then took a couple years to really integrate this insight, and it wasn't until 2017 that I really started to grapple with the problems of the next one I would have.

Comment by Gordon Seidoh Worley (gworley) on How I started believing religion might actually matter for rationality and moral philosophy · 2024-08-23T17:58:27.330Z · LW · GW

My own story is a little different, but maybe not too different.

I wrote some of it a while ago in this post. I don't know if I totally endorse the way I framed it there, so let me try again.

For basically as long as I can remember, my moment-to-moment experience of the world sucked. But of course when your every experience feels net negative, you adapt and learn to live with it. But I also have the kind of mind that likes to understand things and won't rest if it doesn't understand the mechanism by which something works, so I regularly turned this to myself. I was constantly dissatisfied with everything, and just when I'd think I'd nailed down why, it would turn out I had missed something huge and had to start over again.

Eventually this led to some moments of insight when I realized just how trapped by my own ontology I had become, and then found a way through to a new way of seeing the world. These happened almost instantly, like a dam breaking and releasing all the realizations that had been held back.

This led me to positive psychology, because I noticed that sometimes I could make my life better, and eventually led me to realize that religions weren't totally full of bunk, despite having been a life-long atheist. I'm not saying they're right about the supernatural—as best I can tell, those claims are just false if interpreted straightforwardly—but I am saying I discovered that one of the things religions try to do is tell you how to live a happy life, and some do a better job of teaching you to do this than others.

To skip ahead, that's what led me to Buddhism, Zen, and eventually practicing enough that my moment-to-moment experience flipped. Now everything is always deeply okay, even if in a relative sense it's not okay and needs to change, and it was thanks to taking all my skills as a rationalist and then using them with teachings from religion to find my way through.

Comment by Gordon Seidoh Worley (gworley) on Just because an LLM said it doesn't mean it's true: an illustrative example · 2024-08-22T06:06:52.031Z · LW · GW

My experience is that Claude and ChatGPT are tuned to be very agreeable in a way that means they never stand up to you if you ask them to defend something that's probably false but uncertain. The only times they stand up to you are when you ask them about something they're trained not to agree with or talk about, or when you ask something obviously false, like asking them to prove that 2 + 2 = 5.

Comment by Gordon Seidoh Worley (gworley) on I didn't have to avoid you; I was just insecure · 2024-08-19T18:01:04.192Z · LW · GW

> To me these still read like defensive, insecure answers

> What might you say if you felt like that in that situation?

It's tough to say without more context, but if I really felt like I couldn't say much, I'd probably at least give a "nothing burger" answer like "oh, I was ready for something else" or "we got along as best we could". This might feel like the same thing but the vibes of it are different. A polite avoidance of the question while still engaging with it rather than a more direct shut down.

But of course in most cases I'd probably say more because it would be safe to, up to whatever seemed like a reasonable amount of information to disclose under the circumstances.

Comment by Gordon Seidoh Worley (gworley) on What is "True Love"? · 2024-08-18T19:10:40.432Z · LW · GW

Alas, “false” love can still feel like “true” love from the inside as it’s happening. To tell it’s happening, you’d need to either be really good at keeping a level head, rely on feedback from other people you trust, or just wait until the honeymoon stage passes and find out in hindsight.

The best means I know of dealing with this issue is time. Time to allow yourself to work through your feelings and start to see clearly. It's very driven by biology, so I expect similar timelines to hold for most people:

  • The first 3 months of limerence is the most intense. Depending on how you start dating someone, this could start as early as the first date, but more often starts several dates in, maybe around the time you've spent 10-20 hours physically together.
  • Somewhere in the 3-6 month period you start to notice the cracks.
  • By 1 year you'll be pretty clear headed. Clear enough to make big life decisions, but also still motivated to some degree by exaggerated feelings. This is probably the time you're most likely to feel justified in claiming that you found "true love".
  • Around 3 years you'll be totally free of limerence. This is the point where it will be completely obvious how you really feel and if it's actually "true love".

These numbers are mostly based on my own experiences, but they match what I've heard from others. Probably someone has done a more formal study of relationship/love milestones.

Comment by Gordon Seidoh Worley (gworley) on I didn't have to avoid you; I was just insecure · 2024-08-18T19:00:55.844Z · LW · GW

My comment is going to risk psychologizing you and may end up being unwelcome, but since you've opened up on here about your experience of getting less insecure, I don't think it's out of line. Apologies in advance if it is for you.

Something pinged when I read this line:

> You still asked probing questions like “Why did you quit your job?” and “What did you think of your manager? I hear they don't have a great social reputation.”

What pinged is that these don't register to me as probing questions at all! These seem like normal attempts to learn about someone by asking about what is, for most people, a very large part of their life: work.

Then I got to this line:

> In the past, I would have felt forced to answer your questions. But I’m sure you can remember how I responded when we spoke again: “Mm, I don’t want to answer that question”, “I don’t want to gossip”, and even a cheeky, “No comment :)”

To me these still read like defensive, insecure answers. Perhaps less insecure and defensive than totally shutting down and running away, but still refusing to engage in what most people would consider socially acceptable and normal questions to ask. If someone kept giving me answers like this my first thought would be "oh, I see, this person doesn't really want to vibe or converse and is making me do all the social labor".

> But it seems I unlearned most of mine. I don’t encounter situations that make me anxious in that way anymore, and I can’t imagine any new ones either. Rejecting others (and being rejected by others, same thing) has ceased to carry much unnecessary emotional weight.

Maybe. Or maybe you found a new strategy to suppress your anxiety so you don't have to feel it. I don't know; I don't know your mind. But what you've described pattern matches to a kind of bypassing that is a healthier coping strategy than what you were deploying before, but also doesn't fully address the anxiety.

Again, sorry if this was more pointed a comment than you were hoping for. I offer it only in the spirit of saying the sort of thing I would have liked to have had said to me if I were in your position.

Comment by Gordon Seidoh Worley (gworley) on I didn't think I'd take the time to build this calibration training game, but with websim it took roughly 30 seconds, so here it is! · 2024-08-04T18:20:02.498Z · LW · GW

I'd not heard of websim, but it's really cool. Just spent some time playing with it.

What this most reminds me of is what it was like to build things using HyperCard, but 100x easier because there's no need to do any scripting or custom work to make your idea real, you just type text and iterate. Same sort of idea, but a lot faster and more accessible to people with less computer experience.

It's also got some pretty clear limitations right now, but I expect that to improve with time and effort.

I spent about 45 minutes with it and got it to create an app I've been thinking about for the last couple years, but never found the time to spend the several hours to get it going. Really excited about the possibilities of this!

Comment by Gordon Seidoh Worley (gworley) on Can UBI overcome inflation and rent seeking? · 2024-08-04T16:54:44.873Z · LW · GW

We used to avoid this problem by having special stores that distributed government food programs. For a variety of reasons people didn't like this (higher labor costs borne directly by the government, the stigma of going to the government food store, eating generic food ("government cheese")). However, it does help deal with the black market problem: if you, personally, have to show up to get the food you are owed, then you have only literal food to barter with.

I'm not saying we can completely stop economic activity. What I am saying is that there's a lot of benefit to providing tools to help people enforce rules on themselves that they would endorse in hindsight but have trouble enforcing on themselves in the short term due to issues like poor impulse control that are causally upstream of poverty for many.

Comment by Gordon Seidoh Worley (gworley) on Can UBI overcome inflation and rent seeking? · 2024-08-02T20:24:12.791Z · LW · GW

I think you're right that, for many goods, it would be better if we did goods transfers rather than money transfers, similar to how we have SNAP/food stamps, Medicaid/Medicare, etc. programs today. This is because, having been poor myself at times and been around poor people, I've seen that many people are stuck in poverty not because they want to be but because they have trouble managing money effectively, such that cash transfers would be less effective at improving quality of life than goods transfers. Yes, this is reasoning from anecdata, but I've seen it enough to think it's an important aspect of policy design for wealth transfers.

That said, UBI would serve a somewhat different purpose and in theory subsidize people who today manage their money fine, but in the future may not have a source of income due to automation from AI.

Comment by Gordon Seidoh Worley (gworley) on Can UBI overcome inflation and rent seeking? · 2024-08-01T18:40:54.919Z · LW · GW

In that case UBI seems like a bad policy in isolation, as it seems like it may only be effective if rent seeking is effectively curtailed.

Comment by Gordon Seidoh Worley (gworley) on Can UBI overcome inflation and rent seeking? · 2024-08-01T18:39:03.641Z · LW · GW

I think this explains what I was concerned about with UBI. The aggregate effects will be zero, but combined with you and others pointing out that it's a wealth transfer from the wealthiest (whether this happens directly or indirectly), UBI may reasonably give people at the bottom of the market sufficient money to become participants.

I think my concerns about rent seeking tanking UBI are perhaps separate from whether UBI can work in theory, although in practice I'm still quite suspicious that rent seeking will prevent UBI from achieving its desired effects.

Comment by Gordon Seidoh Worley (gworley) on Can UBI overcome inflation and rent seeking? · 2024-08-01T18:31:36.540Z · LW · GW

All redistribution schemes would seem to have some risk of this problem, but it seems like a bigger problem if the redistribution is universal. Like if we redistribute wealth to 1% of the population probably not much will happen. If we do it to 10% I suspect we'd see moderate inflation. If we do it to 100% we'll see a lot. In fact we saw exactly this in the US with COVID subsidy payments, as best I can tell.

Comment by Gordon Seidoh Worley (gworley) on Can UBI overcome inflation and rent seeking? · 2024-08-01T03:12:04.628Z · LW · GW

I think the problem you are describing at the bottom of the market is why I expect UBI not to work, because it will fail to do enough to subsidize demand to move anyone up in the market, resulting in a "wealth" transfer that only serves to reduce average purchasing power.

Comment by Gordon Seidoh Worley (gworley) on Can UBI overcome inflation and rent seeking? · 2024-08-01T03:04:31.674Z · LW · GW

Let's consider the case where UBI is created from taxes. The poorest people are now receiving at least $X a year. Why would this cause the supply of goods to increase? Wouldn't everything just go up in price by $X in aggregate, so that all the additional money at the low end is captured, leaving everyone just where they are now and only curtailing the marginal luxury spending of high earners?

Comment by Gordon Seidoh Worley (gworley) on Can UBI overcome inflation and rent seeking? · 2024-08-01T02:58:25.163Z · LW · GW

I think my concerns hold even if it's easy to build things. Like suppose there are 100 people and 100 houses. Houses have normally distributed annual costs between $100 and $1000. Before UBI, people have annual income of between $100 and $1000, so in theory everyone can occupy a house (and in this simplified example, assume no one needs anything else).

Then we introduce UBI of $50 a year. It seems to me that all annual housing costs should increase by $50 to capture the free money rather than allow it to be spent on anything else.

This is a very simplified example, but I think it's worth figuring out how UBI can do anything other than simply cause inflation.
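The toy example above can be written out explicitly. This is just a sketch of the scenario as I described it, under the assumption that sellers capture the entire transfer:

```python
# Toy model: 100 people, 100 houses. Incomes and annual housing costs
# are both spread evenly between $100 and $1000, so everyone can just
# afford some house. If every house's cost rises by the full UBI
# amount, no one's purchasing power changes.

n = 100
incomes = [100 + i * 900 / (n - 1) for i in range(n)]
costs = [100 + i * 900 / (n - 1) for i in range(n)]

ubi = 50
incomes_after = [x + ubi for x in incomes]
costs_after = [c + ubi for c in costs]  # sellers capture the full transfer

surplus_before = [i - c for i, c in zip(incomes, costs)]
surplus_after = [i - c for i, c in zip(incomes_after, costs_after)]

# Everyone is exactly where they started.
assert surplus_before == surplus_after
```

Of course, the interesting question is what breaks this assumption of full capture, e.g. competition among sellers or elastic housing supply.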

Comment by Gordon Seidoh Worley (gworley) on Can UBI overcome inflation and rent seeking? · 2024-08-01T02:48:24.970Z · LW · GW

I'm not assuming money printing or increasing the money supply in general, only increasing the supply of money that recipients have access to.

Money printing seems like one, probably especially bad, way to create UBI, but other options seem better.

Comment by Gordon Seidoh Worley (gworley) on Cat Sustenance Fortification · 2024-08-01T00:21:52.247Z · LW · GW

One downside to the way you've set up the water with it screwed in is that it'll be harder to clean.

I have a similar water dish for them and it doesn't just need the jug refilled, but the basin cleaned regularly. It gets dirty, I suspect both from mold that attempts to grow in the water and from cat saliva and bits of food that fall in. Also sometimes cats deliberately put food in water before eating it.

Comment by Gordon Seidoh Worley (gworley) on A (paraconsistent) logic to deal with inconsistent preferences · 2024-07-14T22:26:48.081Z · LW · GW

Maybe I'm missing something, but this theory seems to leave out considerations of what's usually the most important aspect of preference models, which is what things are preferred to what. Considering only X > ~X leaves out the many obvious cases of X > Y that we'd like to model.

The usual problem is that we are not time and context insensitive the way simple models are. We might feel X > Y under conditions Z, but Y > X under conditions W, and this is sufficient to explain our seemingly inconsistent preferences: they only look inconsistent on the assumption that we should have the same preferences at all times and under all circumstances. The inclusion of context, such as by adding a time variable to all preference relations, is probably sufficient to rescue the standard preference model: our preferences are consistent at each moment in time, but are not necessarily consistent across different moments, because the conditions of each moment are different and thus change what we prefer.
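To make the context-indexing idea concrete, here's a minimal sketch (names and contexts are hypothetical, just for illustration): the preference relation takes a context argument, so apparently contradictory preferences become consistent once indexed.

```python
# Context-indexed preferences: instead of a global relation a > b,
# we store prefers(a, b, context). Preferences can be consistent
# within each context while differing across contexts.

preferences = {
    "morning": {("coffee", "tea")},   # in the morning: coffee > tea
    "evening": {("tea", "coffee")},   # in the evening: tea > coffee
}

def prefers(a, b, context):
    """True if a is preferred to b under the given context."""
    return (a, b) in preferences.get(context, set())

# The "inconsistency" dissolves once the context is made explicit:
assert prefers("coffee", "tea", "morning")
assert prefers("tea", "coffee", "evening")
assert not prefers("tea", "coffee", "morning")
```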

Comment by Gordon Seidoh Worley (gworley) on Trust as a bottleneck to growing teams quickly · 2024-07-14T17:10:35.537Z · LW · GW

Some thoughts on trust and org growth from a different context: the Zen center. (Note that if you've regularly attended any kind of church this will feel very familiar, just with different flavors.)

In Zen, we all come to learn a lot about trust. It starts with learning to trust our experiences. We often don't trust them at first because we're identified with the idea of what we should be experiencing rather than what we're actually experiencing, but over time we settle down and stop trying to fight reality so that we can learn to dance with it.

But the reality of a Zen sangha is that it's not just about individual practice, but about creating something together with the varied people that show up.

First, you've got the drop-ins & newcomers who aren't members or even regular attendees. They don't know the forms and customs, so they require instruction, but also leeway, because we want them to like the Zen center enough to come back and learn the forms! These are folks who can't really be trusted, other than you can trust that they'll do several norm-violating things while in the center. Most places deal with this by having one or more of the senior students assigned to help newcomers get oriented.

Next, you've got the less-committed regulars. People who show up all the time, definitely know what they are supposed to do, but also don't take things too seriously. Most of them aren't trained to perform rituals or hold complex positions, but they are reliably able to do simpler things, like follow the basic forms, know where the brooms are to sweep the porch, and even instruct newcomers in some simple tasks like finding the bathroom or how to dust the moulding.

Among the regulars, some are committed and serious. This includes all the senior students, but also most of the junior students and occasionally people who aren't very dedicated to practice but are dedicated to showing up. These are the people who get trusted to take on formal positions, like ringing bells, caring for the altar, leading chants, and instructing newcomers. They can all be trusted to follow the forms, know what they are supposed to do, and may be asked to correct others. This is not to say they never make mistakes, but they are all known quantities. If someone has no rhythm or can't sing, you don't ask them to play the mokugyo or to lead chants, but maybe you do train them in altar care or some other tasks they are better suited for.

And then there's the teacher (and sometimes separately an abbot), who sits as the source of trust in the running of the center. Trust is extended from the teacher to the senior students and then on down in a hierarchy (and the hierarchy in the running of a center is explicit and expected to be upheld). So the teacher, say, trusts a student to run work period, and that student then extends trust to each person within the task they've been asked to do. If they can't trust a person to do what they want, they have to train them, and the teacher trusts the student will oversee their training or successfully delegate the training.

In reverse, the students extend trust to the teacher. This is a different sort of trust because it's not about following norms so much as it is about students trusting that the teacher will use their position of authority ethically and for the benefit of their students. The teacher is there to be an authority on the dharma and to help students learn it and ultimately to help them wake up. If the students lose trust in their teacher, they'll wander off to another teacher or maybe leave Zen altogether.

To me the analogies with a business are obvious. Bosses, managers, and supervisors extend trust to their subordinates, and doing so requires an understanding of what they can trust (expect with high confidence) each person to do. In reverse, subordinates must trust their leaders to have their best interests and the interests of the company in mind, and people often quit when they lose trust in their boss.

As a final note, I think of trust as one of the key building blocks of civilized life, and the more trust we can extend to each other, the more civilized life becomes. High trust requires that everyone actually do what they are trusted to do, and this applies not just on the scale of Zen centers and businesses, but also to countries and even the whole planet.

Comment by Gordon Seidoh Worley (gworley) on Trust as a bottleneck to growing teams quickly · 2024-07-14T16:30:12.059Z · LW · GW

The way I've heard this advice phrased is that growth is bottlenecked by the number of people you can fully trust. Trust zero people? You're gonna have to manage every detail yourself. Trust one person? Now you and they can manage every detail within your scope. And so on.

Partial trust is not as good, because you'll still have to check up on some things, and you often need a lot of context to check up on even a few details of another person's work. The difference between 95% trust and 100% trust is huge.

Comment by Gordon Seidoh Worley (gworley) on Finding the Wisdom to Build Safe AI · 2024-07-05T15:36:21.097Z · LW · GW

Seems reasonable. I do still worry quite a bit about Goodharting, but perhaps this could be reasonably addressed with careful oversight by some wise humans to do the wisdom equivalent of red teaming.

Comment by Gordon Seidoh Worley (gworley) on Finding the Wisdom to Build Safe AI · 2024-07-05T15:34:22.665Z · LW · GW

This is a place where my Zen bias is showing through. When I wrote this I was implicitly thinking about the way we have a system of dharma transmission that, at least as we practice Zen in the west, also grants teaching authorization, so my assumption was that if we feel confident certifying an AI as wise, this would imply also believing it to be wise and skilled enough to teach what it knows. But you're right, these two aspects, wisdom and teaching skill, can be separated, and in fact in Japan this is the case: dharma transmission generally comes years before teaching certification is granted, and many more people receive transmission than are granted the right to teach.

Comment by Gordon Seidoh Worley (gworley) on Finding the Wisdom to Build Safe AI · 2024-07-05T15:28:03.694Z · LW · GW

I'm not sure what this comment is replying to. I don't think it's likely that AI will be very human-like, nor do I have special reason to advocate for human-like AI designs. I do note that some aspects of training wise AI may be easier if AI were more like humans, but that's contingent on what I consider to be the unlikely possibility of human-like AI.