Posts

Which things were you surprised to learn are metaphors? 2024-11-22T03:46:02.845Z
Fundamental Uncertainty: Epilogue 2024-11-16T00:57:48.823Z
Fundamental Uncertainty: Chapter 9 - How do we live with uncertainty? 2024-11-07T18:15:45.049Z
Word Spaghetti 2024-10-23T05:39:20.105Z
Can UBI overcome inflation and rent seeking? 2024-08-01T00:13:51.693Z
Finding the Wisdom to Build Safe AI 2024-07-04T19:04:16.089Z
How was Less Online for you? 2024-06-03T17:10:33.766Z
Fundamental Uncertainty: Chapter 8 - When does fundamental uncertainty matter? 2024-04-26T18:10:26.517Z
Dangers of Closed-Loop AI 2024-03-22T23:52:22.010Z
On "Geeks, MOPs, and Sociopaths" 2024-01-19T21:04:48.525Z
A discussion of normative ethics 2024-01-09T23:29:11.467Z
Extrapolating from Five Words 2023-11-15T23:21:30.865Z
Fundamental Uncertainty: Chapter 1 - How can we know what's true? 2023-08-13T18:55:44.861Z
Physics is Ultimately Subjective 2023-07-14T22:19:01.151Z
Optimal Clothing 2023-05-31T01:00:37.541Z
How much do personal biases in risk assessment affect assessment of AI risks? 2023-05-03T06:12:57.001Z
Fundamental Uncertainty: Chapter 7 - Why is truth useful? 2023-04-30T16:48:58.312Z
Industrialization/Computerization Analogies 2023-03-27T16:34:21.659Z
Fundamental Uncertainty: Chapter 6 - How can we be certain about the truth? 2023-03-06T13:52:09.333Z
Feelings are Good, Actually 2023-02-21T02:38:11.793Z
How much is death a limit on knowledge accumulation? 2023-02-14T03:54:16.070Z
Acting Normal is Good, Actually 2023-02-10T23:35:41.043Z
Religion is Good, Actually 2023-02-09T06:34:12.601Z
Drugs are Sometimes Good, Actually 2023-02-08T02:24:24.152Z
Sex is Good, Actually 2023-02-05T06:33:26.027Z
Small Talk is Good, Actually 2023-02-04T00:38:21.935Z
Exercise is Good, Actually 2023-02-02T00:09:18.143Z
Nice Clothes are Good, Actually 2023-01-31T19:22:06.430Z
Amazon closing AmazonSmile to focus its philanthropic giving to programs with greater impact 2023-01-19T01:15:09.693Z
MacArthur BART (Filk) 2023-01-02T22:50:04.248Z
Fundamental Uncertainty: Chapter 5 - How do we know what we know? 2022-12-28T01:28:50.605Z
[Fiction] Unspoken Stone 2022-12-20T05:11:23.231Z
The Categorical Imperative Obscures 2022-12-06T17:48:01.591Z
Contingency is not arbitrary 2022-10-12T04:35:07.407Z
Truth seeking is motivated cognition 2022-10-07T19:19:27.456Z
Quick Book Review: Crucial Conversations 2022-09-19T06:25:23.052Z
Keeping Time in Epoch Seconds 2022-09-10T00:28:08.137Z
Fundamental Uncertainty: Chapter 4 - Why don't we do what we think we should? 2022-08-29T19:25:16.917Z
Fundamental Uncertainty: Chapter 3 - Why don't we agree on what's right? 2022-06-25T17:50:37.565Z
Fundamental Uncertainty: Chapter 2 - Why do words have meaning? 2022-04-18T20:54:24.539Z
Modect Englich Cpelling Reformc 2022-04-16T23:38:50.212Z
Good Heart Donation Lottery Winner 2022-04-08T20:34:41.104Z
How I Got So Much GHT 2022-04-07T03:59:36.538Z
What are rationalists worst at? 2022-04-06T23:00:08.600Z
My Recollection of How This All Got Started 2022-04-06T03:22:48.988Z
You get one story detail 2022-04-05T04:38:36.022Z
Software Engineering: Getting Hired and Promoted 2022-04-04T22:31:52.967Z
My Superpower: OODA Loops 2022-04-04T01:51:46.622Z
How Real Moral Mazes (in Bay Area startups)? 2022-04-03T18:08:54.220Z
Becoming a Staff Engineer 2022-04-03T02:30:12.951Z

Comments

Comment by Gordon Seidoh Worley (gworley) on Vegans need to eat just enough Meat - emperically evaluate the minimum ammount of meat that maximizes utility · 2024-12-23T00:10:05.452Z · LW · GW

If someone has gone so far as to buy supplements, they have already done far more to engineer their nutrition than the vegans I've known who struggle with nutrition.

Comment by Gordon Seidoh Worley (gworley) on Good Reasons for Alts · 2024-12-23T00:08:11.997Z · LW · GW

I generally avoid alts for myself, and one of the benefits I see is that I feel the weight of what I'm about to post.

Maybe I would sometimes write funnier, snarkier things on Twitter that would get more likes, but because my name is attached I'm forced to reconsider. Is this actually mean? Do I really believe this? Does this joke go too far?

Strange to say perhaps, but I think not having alts makes me a better person, in the sense of being better at being the type of person I want to be, because I can't hide behind anonymity.

Comment by Gordon Seidoh Worley (gworley) on The nihilism of NeurIPS · 2024-12-22T23:59:12.499Z · LW · GW

Thanks for writing this up. This is something I think a lot of people are struggling with, and will continue to struggle with as AI advances.

I do have worries about AI, mostly that it will be unaligned with human interests and we'll build systems that squash us like bugs because they don't care if we live or die. But I have no worries about AI taking away our purpose.

The desire to feel like one has a purpose is a very human characteristic. I'm not sure that any other animals share our motivation to have a motivation. In fact, past humans seem to have had less of this, too, if reports of extant hunter-gatherer tribes are anything to go by. But we feel like we're not enough if we don't have a purpose to serve. Like our lives aren't worth living if we don't have a reason to be.

Maybe this was a historically adaptive fear. If you lived in a small band or a pre-industrial society, every person had a real cost to existing. Societies existed up against the Malthusian limit, and there was no capacity to feed more mouths. You either contributed to society, or you got cast out, because everyone was in survival mode, and surviving is what we had to do to get here.

But AI could make it so that literally no one has to work ever again. If we get it right, perhaps we will have no purpose to serve to ensure our continued survival. Is that a problem? I don't think it has to be!

Our minds and cultures are built around the idea that everyone needs to contribute. People internalize this need, and one way it can come out is as feeling like life is not worth living without purpose.

But you do have a purpose, and it's the same one all living things share: to exist. It is enough to simply be in the world. Everything else is contingent on what it takes to keep existing.

If AI makes it so that no one has to work, that most of us are out of jobs, that we don't even need to contribute to setting our own direction, that need not necessarily be bad. It could go badly, yes, but it also could be freeing to be as we wish, rather than as we must.

I speak from experience. I had a hard time seeing that simply being is enough. I've also met a lot of people who had this same difficulty, because it's what draws them to places like the Zen center where I practice. And everyone is always surprised to discover, sometimes after many years of meditation, that there was never anything that needed to be done to be worthy of this life. If we can eliminate the need to do things to get to keep living this life, so that no one need lose it to accident or illness or confusion or anything else, then all the better.

Comment by Gordon Seidoh Worley (gworley) on Vegans need to eat just enough Meat - emperically evaluate the minimum ammount of meat that maximizes utility · 2024-12-22T23:41:45.408Z · LW · GW

I want to push back a little in that I was fully vegan for a few years with no negative side effects, other than sometimes being hungry because there was nothing I would eat and annoying my friends with requests to accommodate my dietary preferences. I even put on muscle and cut a lot of fat from my body!

I strongly suspect, based on experience with lots of other vegans, that vegans who struggle with nutritional deficiencies are bad at making good choices about macro nutrients.

Broadly speaking, the challenge in a vegan diet is getting enough lysine. Most every other nutrient you need is found in abundance, but lysine is tricky because humans mostly get that amino acid from meat. Getting enough isn't that hard if you know what to eat, but you have to eat it in sufficient volume to avoid problems.

What does it take to get enough lysine? Beans, lots of beans! If you're vegan and not eating beans you are probably lysine deficient and need to eat more beans. How many beans? Way more than you think. Beans have lots of fiber and aren't nutrient dense like meat.
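To make "way more than you think" concrete, here's a rough back-of-the-envelope sketch in Python. The requirement and lentil figures are illustrative assumptions pulled from memory, not numbers from this post; check a nutrition database such as USDA FoodData Central before relying on them.

```python
# Back-of-the-envelope lysine math. All figures are rough assumptions
# for illustration only; check a nutrition database before relying on them.

BODY_WEIGHT_KG = 70
LYSINE_MG_PER_KG_PER_DAY = 30            # assumed adult requirement, roughly the WHO figure
LYSINE_MG_PER_CUP_COOKED_LENTILS = 1250  # assumed value for ~1 cup (~198 g) cooked lentils

daily_need_mg = BODY_WEIGHT_KG * LYSINE_MG_PER_KG_PER_DAY
cups_needed = daily_need_mg / LYSINE_MG_PER_CUP_COOKED_LENTILS

print(f"Daily lysine need: {daily_need_mg} mg")                        # 2100 mg
print(f"Cups of cooked lentils to cover it alone: {cups_needed:.1f}")  # ~1.7
```

Under those assumptions, a 70 kg person relying on lentils alone needs nearly two cups a day, every day, which is exactly the "way more than you think" problem.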

I met lots of vegans who didn't eat enough beans. They'd eat mushrooms, but not enough, and lots of other protein sources, but not ones with enough lysine. They'd just eat a random assortment of vegan things without really thinking hard about whether they were eating the right things. That's a strategy that works if you eat a standard diet that's been evolved by our culture to be relatively complete, but not if you eat a constructed diet like modern vegans do.

Now, I have met a few people who seem to have individual variation issues that make it hard for them to eat vegan and stay healthy. In fact, I'm now one of those, because I developed some post-COVID food sensitivities that forced me to go vegetarian, and then to start eating meat when that wasn't enough. And some people seem to process protein differently, in a way that is weird to me, but they insist that if they don't eat some meat every 4 hours or so they feel like crap.

So I'm not saying there aren't some people who do need to eat meat, for whom just reducing the amount is the best they can safely do, but I'm also saying that I think a lot of vegans screw up not because they don't eat meat but because they don't think seriously enough about whether they are getting enough lysine every day.

Comment by Gordon Seidoh Worley (gworley) on Being Present is Not a Skill · 2024-12-21T05:04:43.077Z · LW · GW

What would it mean for this advice to not generalize? Like what cases are you thinking of where what someone needs to do to be more present isn't some version of resolving automatic predictions of bad outcomes?

I ask because this feels like a place where disagreeing with the broad form of the claim suggests you disagree with the model of what it means to be present rather than that you disagree with the operationalization of the theory, which is something that might not generalize.

Comment by Gordon Seidoh Worley (gworley) on Being Present is Not a Skill · 2024-12-21T04:56:55.047Z · LW · GW

I think you still have it wrong, because being present isn't a skill. It's more like an anti-skill: you have to stop doing all the stuff you're doing that keeps you from just being.

There is, instead, a different skill that's needed to make progress towards being present. It's a compound skill: noticing what you do out of habit rather than in response to present conditions, figuring out why you have those habits, practicing not engaging in those habits when you otherwise would, and thereby developing trust that you can safely drop them, retraining yourself to do less out of habit and be closer to just being and responding.

Comment by Gordon Seidoh Worley (gworley) on Information vs Assurance · 2024-12-09T18:09:55.268Z · LW · GW

I can't think of a time where such false negatives were a real problem. False positives, in this case, are much more costly, even if the only cost is reputation.

If you never promise anything, that could be a problem. Same if you make promises but no one believes them. Being able to make commitments is sometimes really useful, so you need to at least keep alive the ability to make and hit commitments so you can use them when needed.

Comment by Gordon Seidoh Worley (gworley) on Being at peace with Doom · 2024-12-06T01:57:20.194Z · LW · GW

As AI continues to accelerate, the central advice presented in this post, to be at peace with doom, will become increasingly important to help people stay sane in a world where it may seem like there is no hope. But really there is hope so long as we keep working to avert doom, even if it's not clear how we do that, because we've only truly lost when we stop fighting.

Comment by Gordon Seidoh Worley (gworley) on Recreating the caring drive · 2024-12-06T01:54:36.299Z · LW · GW

I'd really like to see more follow-up on the ideas presented in this post. Our drive to care is arguably why we're willing to cooperate, and making AI that cares the same way we do is a potentially viable path to AI aligned with human values, but I've not seen anyone take it up. Regardless, I think this is an important idea and think folks should look at it more closely.

Comment by Gordon Seidoh Worley (gworley) on You don't get to have cool flaws · 2024-12-06T01:52:48.917Z · LW · GW

This post makes an easy-to-digest and compelling case for getting serious about giving up flaws. Many people build their identity around various flaws, and having a post that crisply makes the case that doing so is net bad gives you something helpful to point people at when you see them suffering in this way.

Comment by Gordon Seidoh Worley (gworley) on Teleosemantics! · 2024-12-06T01:50:45.801Z · LW · GW

I think this post is important because it brings old insights from cybernetics into a modern frame that relates to how folks are thinking about AI safety today. I strongly suspect that the big idea in this post, that ontology is shaped by usefulness, matters greatly to addressing fundamental problems in AI alignment.

Comment by Gordon Seidoh Worley (gworley) on Orca communication project - seeking feedback (and collaborators) · 2024-12-03T18:10:34.117Z · LW · GW

I'm less confident than you are about your opening claim, but I do think it's quite likely that we can figure out how to communicate with orcas. Kudos for just doing things.

I'm not sure how it would fit with their mission, but maybe there's a way you could get funding from EA Funds. It doesn't sound like you need a lot of money.

Comment by Gordon Seidoh Worley (gworley) on 2024 Unofficial LessWrong Census/Survey · 2024-12-03T06:38:11.254Z · LW · GW

Completed

Comment by Gordon Seidoh Worley (gworley) on Which Biases are most important to Overcome? · 2024-12-01T18:53:42.162Z · LW · GW

The Typical Mind Fallacy is the most important bias in human reasoning.

How do I know? Because it's the one I struggle with the most!

Comment by Gordon Seidoh Worley (gworley) on What epsilon do you subtract from "certainty" in your own probability estimates? · 2024-11-27T06:49:29.374Z · LW · GW

Back when I tried playing some calibration games, I found I was not able to get successfully calibrated above 95%. At that point I started making errors from things like misinterpreting the question or randomly hitting the wrong button.

The math is not quite right on this, but from it I've adopted a personal 5% error margin policy. In practice this seems to be about the limit of my ability to make accurate predictions, and it's served me well.
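As a minimal sketch of that policy in code (the slip model here is my own simplification for illustration, not a claim about the exactly right math):

```python
# A minimal sketch of the personal 5% error margin policy described above.
# Simplifying assumption: with probability EPSILON, a trivial slip
# (misreading the question, hitting the wrong button) makes the answer wrong.

EPSILON = 0.05  # the 5% margin

def adjusted_confidence(internal_p: float, epsilon: float = EPSILON) -> float:
    """Discount a raw internal probability for trivial slips and
    clamp the result to [epsilon, 1 - epsilon]."""
    discounted = internal_p * (1 - epsilon)  # right belief and no slip
    return min(max(discounted, epsilon), 1 - epsilon)

print(adjusted_confidence(0.999))  # ~0.95: never report above 1 - epsilon
print(adjusted_confidence(0.80))   # 0.76: slips shave a little off everywhere
```

Even perfect internal certainty caps out near 95%, which matches what the calibration games showed me.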

Comment by Gordon Seidoh Worley (gworley) on Which things were you surprised to learn are metaphors? · 2024-11-27T06:44:06.671Z · LW · GW

What does this mean?

Comment by Gordon Seidoh Worley (gworley) on Which things were you surprised to learn are not metaphors? · 2024-11-22T03:46:58.172Z · LW · GW

I like this question a lot, but I'm more interested in its opposite, so I asked it!

Comment by Gordon Seidoh Worley (gworley) on What are the good rationality films? · 2024-11-21T16:50:58.369Z · LW · GW

Yes, this is why I like the movie better than the short story. PKD did more of what Total Recall did in other stories, like Ubik and A Scanner Darkly and The Man Who Japed, but never sends it fully the way Total Recall does.

Comment by Gordon Seidoh Worley (gworley) on What are the good rationality films? · 2024-11-20T20:59:41.812Z · LW · GW

Quest (1984)

This movie was written by Ray Bradbury.

It's about people who have 8-day lifespans, and follows the story of a boy who grows up to fulfill a great quest. I like it from a rationalist standpoint because it has themes similar to those we have around AI, life extension, and more: we have a limited time to achieve something, and if we don't pull it off we are at least personally doomed, and maybe societally, too.

Comment by Gordon Seidoh Worley (gworley) on What are the good rationality films? · 2024-11-20T20:56:05.807Z · LW · GW

PT Barnum (1999)

This is a made for TV movie that can easily be found for free on YouTube.

I like it because it tells a somewhat fictionalized account of PT Barnum's life that shows him as an expert in understanding the psychology of people and figuring out how to give them products they'll love. Some might say what he does is exploitative, but the movie presents him as not much different from modern social media algorithms that give us exactly what we want, even if we regret it in hindsight.

The rationalist angle is coming away with a sense of what it's like to be a live player who is focused on achieving something and in deep contact with reality to achieve it, willing to ignore social scripts in order to get there.

Comment by Gordon Seidoh Worley (gworley) on What are the good rationality films? · 2024-11-20T20:46:35.575Z · LW · GW

Total Recall (1990)

Based on the Philip K. Dick short story "We Can Remember It For You Wholesale". The movie is better than the short story.

I can't tell you why this is a rationality movie without spoilers...

The movie masterfully sucks you into a story where you don't know if you're watching what's really happening, or if you're watching the false memories inserted into the protagonist's mind at the start of the film. Much of the fun for rationalists would be trying to figure out if the film was reality or implanted memory.

Comment by gworley on [deleted post] 2024-11-20T06:15:13.258Z

It's not quite like the dot-com bust. The bottom of the market is very soft, with new grads basically having no options, but the top of the market is extremely tight, with the middle doing about like normal. Employers feel they can be more choosy right now for all roles, though, so they are. That will change if roles sit unfilled for longer.

Comment by Gordon Seidoh Worley (gworley) on The Choice Transition · 2024-11-18T18:52:22.096Z · LW · GW

How would you compare your ideas here to Asimov's fictional science of psychohistory? I ask because while reading this post I kept getting flashbacks to Foundation.

Comment by Gordon Seidoh Worley (gworley) on Fundamental Uncertainty: Chapter 9 - How do we live with uncertainty? · 2024-11-08T01:03:56.891Z · LW · GW

Yes, red is perhaps the most useful color to be able to see! That's why I chose to use it in this example.

Comment by gworley on [deleted post] 2024-11-05T03:14:31.323Z

I don't know, but I can say that after a lot of hours of Alexander lessons my posture and movement improved in ways that would be described as "having less muscle tension", and this reduction in tension happened in conjunction with various sorts of opening, being more awake, and moving closer to PNSE.

Comment by Gordon Seidoh Worley (gworley) on Death notes - 7 thoughts on death · 2024-10-29T03:25:36.676Z · LW · GW

Thank you for sharing your thoughts, and sorry for your losses. It's often hard to talk about death, especially about the deaths of those we love. I don't really have anything to say other than that I found this moving to read, and I'm glad you shared it with us.

Comment by Gordon Seidoh Worley (gworley) on somebody explain the word "epistemic" to me · 2024-10-28T17:16:29.344Z · LW · GW

Here's more answer than you probably wanted.

First up, the word "epistemic" solves a limitation of the word "knowledge": it doesn't easily turn into an adjective. Yes, like all nouns in English it can be used like an adjective in the creation of noun phrases, but "knowledge state" and "knowledge status" don't sound as good.

But more importantly there's a strong etymological reason to prefer the word "epistemic" in these cases. "Epistemic" comes from "episteme", one of the Greek words for knowledge[1]. Episteme is knowledge that is justified by observation and reason, and importantly is known because the knower was personally convinced of the justification, as opposed to gnosis, where the only justification is experience, or doxa, which is second-hand knowledge[2].

Thus "epistemic" carries with it the connotation of being related to justified beliefs. An "epistemic state" or "epistemic status" implies a state or status related to how justified one's beliefs are.

  1. ^

    "Knowledge" is cognate with another Greek word for knowledge, "gnosis", but the two words evolved along different paths from PIE *gno-, meaning "know".

  2. ^

    We call doxa "hearsay" in English, but because of that word's use in legal contexts, it carries some pejorative baggage related to how hearsay is treated in trials. To get around this we often avoid the word "hearsay" and instead focus on our level of trust in the person we learned something from, but this doesn't make a clear distinction between hearsay and personally justified knowledge.

Comment by Gordon Seidoh Worley (gworley) on The hostile telepaths problem · 2024-10-27T20:30:49.232Z · LW · GW

I'm sure my allegiance to these United States was not created just by reciting the Pledge thousands of times. In fact, I resented the Pledge for a lot of my life, especially once I learned more about its history.

But if I'm honest with myself, I do feel something like strong support for the ideals of the United States, much stronger than would make sense if someone had convinced me as an adult that its founding principles were a good idea. The United States isn't just my home. I yearn for it to be great, to embody its values, and to persist, even as I disagree with many of the details of how we're implementing the dream of the founders today.

Why do I think the Pledge mattered? It helped me get the feeling right. Once I had positive feelings about the US, of course I wanted to actually like the US. I latched onto the part of it that resonates with me: the founding principles. Someone else might be attracted to something else, or maybe would even find they don't like the United States but stay loyal to it because they have to.

I'm also drawing on my experience with other fake-it-until-you-make-it rituals. For example, I and many people really have come to feel more grateful for the things we have in life by explicitly acknowledging that gratitude. At the start it's fake: you're just saying words. But eventually those words start to carry meaning, and before long it's not fake. You find the gratitude that was already inside you and learn how to express it.

In the opening example, I bet something similar could work for getting kids to apologize. No need to check if they are really sorry, just make them say sorry. Eventually the sadness at having caused harm will become real and flow into the expression of it. It's like a kind of reverse training, where you create handles for latent behaviors to crystallize around, and by creating the right conditions when the ritual is performed, you stand a better-than-chance possibility of getting the desired association.

Comment by Gordon Seidoh Worley (gworley) on The hostile telepaths problem · 2024-10-27T18:48:05.584Z · LW · GW

Some cultures used to, and maybe still do, have a solution to the hostile telepaths problem you didn't list: perform rituals even if you don't mean them.

If a child breaks their mom's glasses, the mom doesn't care if they are really sorry or not. All she cares about is if they perform the sorry-I-broke-your-glasses ritual, whatever that looks like. That's all that's required.

The idea is that the meaning comes later. We have some non-central instances of this in Western culture. For example, most US school children recite the Pledge of Allegiance every day (or at least they used to). I can remember not fully understanding what the words meant until I was in middle school, but I just went along with it. And wouldn't you know it, it worked! I do have an allegiance to the United States as a concept.

The world used to be more full of these rituals and strategies for appeasing hostile telepaths, who just chose not to use their telepathy because everyone agreed it didn't matter so long as the rituals were performed. But the spread of Christianity and Islam has brought a demand for internalized control of behaviors to much of the world, and with it we get problems like shame and guilt.

Now I'm not saying that performing rituals even if you don't mean them is a good solution. There are a lot of tradeoffs to consider, and guilt and shame offer some societal benefits that enable higher trust between strangers. But it is an alternative solution, and one that, as my Pledge of Allegiance example suggests, does sometimes work.

Comment by Gordon Seidoh Worley (gworley) on Word Spaghetti · 2024-10-24T17:11:48.029Z · LW · GW

Many ideas are hard to fully express in words. Maybe no idea can be precisely and accurately captured. Something is always left out when we use our words.

What I think makes some people faster (and arguably better) writers is that they natively think in terms of communication with others, whereas I natively think in terms of world modeling and then try to come up with words that explain the world model. They don't have to go through a complex thought process to figure out how to transmit their world model to others, because they just say things that convey the messages that exist in their head, and those messages are generated based on their model of the world.

Comment by Gordon Seidoh Worley (gworley) on Word Spaghetti · 2024-10-24T17:04:06.087Z · LW · GW

Yep! In fact, an earlier draft of this post included a mention of Paul Graham, because he's a popular and well-liked example of someone who has a similar process to the one I use (though I don't know if he does it for the same reasons).

In that earlier draft, I contrasted Graham with Scott Alexander, who I vaguely recall mentioning that he basically sits down at his computer and a couple hours later a finished piece of writing has appeared. But I couldn't find a good reference for this being Scott's process, so maybe it's just a thing I talked with him about in person one time.

In the end I decided this was an unnecessary tangent for the body of the text, but I'm very glad to have a chance to talk about it in the comments! Thanks!

Comment by Gordon Seidoh Worley (gworley) on [Intuitive self-models] 6. Awakening / Enlightenment / PNSE · 2024-10-23T04:28:05.252Z · LW · GW

As of late July last year, "I" am in PNSE. A few comments.

First, no major errors or concerns when reading the post. I might have missed something, but nothing triggered the "this is misunderstanding what PNSE is fundamentally like" alarm.

Second, there's a lot of ways PNSE is explained. I like this short version: "I am me". That is, "I", the subject of experience, no longer experiences itself as subject, but rather as object, i.e. "me". It's like having a third-person experience of the self. I also like to describe it as thought becoming a sense, like vision or hearing, because "I" no longer do the thinking; instead this person does the thinking to me.

Third, not everyone describes it this way, but in Zen we call the transition into PNSE the Great Death because it literally feels like dying. It's not dissimilar from the ego death people experience on drugs like LSD, but ego "death" is better described as ego "sleep" because it comes back and, after it's happened once, the mind knows the ego is going to come back, whereas in the Great Death the sense of separate self is gone and not coming back. All that said, many with PNSE don't experience a violent transition like this, so the Great Death or something like it may be a contingent feature of some paths to PNSE and not others.

Fourth, I don't remember if the paper discusses this, and this is controversial among some Buddhist traditions, but PNSE doesn't mean the mind is totally liberated from belief in a separate self. You said the homunculus concept lies dormant, but I'd say it does more than that. The mind is filled with many beliefs that presuppose the existence of the homunculus, and even if the homunculus is no longer part of experiences of the world, it's still baked into habits of behavior. It takes significant additional work once in PNSE to learn new habits, ones that don't have the homunculus baked into them, to replace the old ones. Very few people ever become free of all of them, and maybe literally no one does as long as they continue to live.

Fifth and finally, PNSE is great, and I'm glad it's how I am now. It's also fine not to be in it, because even if you believe you have a homunculus, in an absolute sense you already don't; you're just confused about how the world works, and that's okay, we're all confused. PNSE is also confused, but in different ways, and with fewer layers of confusion. So if you read this post and are now excited to try for PNSE, great, do it, but be careful. Lots of people Goodhart on what they think PNSE is because they try too hard to get it. If PNSE doesn't sneak up on you, then be extra suspicious of Goodharting! (Actually, just always be suspicious that you've Goodharted yourself!)

Comment by Gordon Seidoh Worley (gworley) on Information vs Assurance · 2024-10-20T23:48:47.673Z · LW · GW

The information/assurance split feels quite familiar to me as an engineering manager.

My work life revolves around projects, especially big projects that take months to complete. Other parts of the business depend on when these projects will be done. In some cases, the entire company's growth plans may hinge on my team completing a project by a certain time. And so everyone wants as much assurance as possible about when projects will complete.

This makes it really hard to share information, because people are so hungry for assurance they will interpret almost any sharing of information as assurance. A typical conversation I used to have when I was naive to this fact:

Sales manager: Hey, Gordon, when do you think that project will be done?

Me: Oh, if things go according to plan, probably next month.

Sales manager: Cool, thanks for the update!

If the project ships next month, no problem. But as often happens in software engineering, if the project gets delayed, now the sales manager is upset:

Them: Hey, you said it would be ready next month. What gives?

Me: I said if things went according to plan, but there were surprises, so it took us longer than we initially thought it would.

Them: Dammit. I sold a customer on the assumption that the project was shipping this month! What am I supposed to tell them now?

Me: I don't know, why did you do that? I was giving you an internal estimate, not a promise of delivery.

Them: You said this month. I'm tired of Engineering always having some excuse about why stuff is delayed.

What did I do wrong? I failed to understand that Sales, and most other functions in a software business, are so dependent on and hungry for information from Engineering that they saw the assurance they wanted to see rather than the information I was giving.

I've (mostly) learned my lesson. I have to carefully control how much I say to anyone not directly involved in the project, lest they get the wrong idea.

Someone: Hey, Gordon, when do you think that project will be done?

Me: We're working on it. We set a goal of having it complete by end of next quarter.

Do I actually expect it to take all the way to next quarter? No. Most likely it'll be done next month. But if anything unexpected happens, now I've given a promise I can keep.

This isn't exactly just "underpromise, overdeliver". That's part of it, but it's also about noticing when you're accidentally making a promise. Even when you think you're not, even if you say really explicitly that you're not making a promise, someone will interpret it as a promise, and now you'll have to deal with that.

Comment by Gordon Seidoh Worley (gworley) on The Hopium Wars: the AGI Entente Delusion · 2024-10-14T18:30:48.881Z · LW · GW

I defined tool AI specifically as controllable, so AI without a quantitative guarantee that it's controllable (or "safe", as you write) wouldn't meet the safety standards and its release would be prohibited.

If your stated definition is really all you mean by tool AI, then you've defined tool AI in a very nonstandard way that will confuse your readers.

When most people hear "tool AI", I expect them to think of AI like hammers: tools they can use to help them achieve a goal, but aren't agentic and won't do anything on their own they weren't directly asked to do.

You seem to have adopted a definition of "tool AI" that actually means "controllable and goal-achieving AI" but gives no consideration to agency, so I can only conclude from your writing that you would mean for AI agents to be included as tools, even if they operated independently, so long as they could be controlled in some sense (exactly what sense of control, you never specify). This is not what I expect most people to understand by "tool".

Again, I like all the reasoning about entente, but this use of the word "tool AI" is confusing, maybe even deceptive (I assume that was not the intent!). It also leaves me feeling like your "solution" of tool AI is nothing other than a rebrand of what we've already been talking about in the field variously as safe, aligned, or controllable AI, which I guess is fine, but "tool AI" is a confusing name for that. This also further downgrades my opinion of the solution section, since as best I can tell it's just saying "build AI safely" without enough details to be actionable.

Comment by Gordon Seidoh Worley (gworley) on The Hopium Wars: the AGI Entente Delusion · 2024-10-13T18:39:05.665Z · LW · GW

What do you make of the extensive arguments that tool AIs are not actually safer than other forms of AI, and only look that way on the surface by ignoring issues of instrumental convergence toward power-seeking and the capacity for tool AI to do extensive harm even while under human control? (See the Tool AI page for links to many posts tackling this question from different angles.)

(Also, for what it's worth, I was with you until the Tool AI part. I would have liked this better if it had been split between one post arguing what's wrong with entente and one post arguing what to do instead.)

Comment by Gordon Seidoh Worley (gworley) on Values Are Real Like Harry Potter · 2024-10-10T07:19:44.538Z · LW · GW

I agree with the main claim of this post, mostly because I came to the same conclusion several years ago and have yet to have my mind changed away from it in the intervening time. If anything, I'm even more sure that values are after-the-fact reifications that attempt to describe why we behave the way we do.

Comment by Gordon Seidoh Worley (gworley) on [Intuitive self-models] 3. The Homunculus · 2024-10-04T23:01:41.829Z · LW · GW

Anyway, after a bit more effort, I found the better search term, hara, and lots of associated results that do seem to back up Johnstone’s claim (if I’m understanding them right—the descriptions I’ve found feel a bit cryptic). Note, however, that Johnstone was writing 45 years ago, and I have a vague impression that Japanese people below age ≈70 probably conceptualize themselves as being in the head—another victim of the ravages of global cultural homogenization, I suppose. If anyone knows more about this topic, please share in the comments!

I'm not Japanese, but I practice Zen, so I'm very familiar with the hara. I can't speak to what it would be like to have had the belief that my self was located in the hara, but I can talk about its role in Zen.

Zen famously, like all of Buddhism, says that there's no separate self, i.e. the homunculus isn't how our minds work. A common starting practice instruction in Zen is to meditate on the breath at the hara, which is often described as located about 2 inches inside the body from the bellybutton.

This 2-inch number assumes you're fairly thin, and it may not be that helpful a way to find the spot anyway. I instead tell people to find it by feeling for where the very bottom of their diaphragm is. It feels like the lowest point in the body that contracts at the start of a breath, and the lowest point that relaxes when the breath finishes.

Some Zen teachers say that hara is where attention starts, as part of a broader theory that attention/awareness cycles with the breath. I wrote about this a bit previously in a book review. I don't know if that's literally true, but as a practice instruction it's effective to have people put their attention on the hara and observe their breathing. This attention on the breath at a fixed point can induce a pleasant trance state that often creates jhana, and longer term it helps with the nervous system regulation training that meditation performs.

It takes most people several hundred to a few thousand hours to be able to really stabilize their attention on the hara during meditation, although the basics of it can be grasped within a few dozen hours.

Comment by Gordon Seidoh Worley (gworley) on Eye contact is effortless when you’re no longer emotionally blocked on it · 2024-10-03T15:37:32.934Z · LW · GW

One practice we have done at times at my Zen center during sesshins is eye-gazing practice. In it, you sit across from someone and just look into their eyes silently for several minutes while they do the same. That's it. Simple, but a really effective way to feel into the nonseparateness and embeddedness of living.

Comment by Gordon Seidoh Worley (gworley) on Information dark matter · 2024-10-02T00:28:23.973Z · LW · GW

This seems like a fine topic, but FYI I ended up giving it a downvote because I gave up reading partway through and started skimming, and ironically most of what's in this post turned into information dark matter because I lost faith that I'd gain more from reading it than skimming. I'd have preferred a more condensed post.

Comment by Gordon Seidoh Worley (gworley) on A Path out of Insufficient Views · 2024-09-25T03:10:09.739Z · LW · GW

There's people who identify more with System 2. And they tend to believe truth is found via System 2 and that this is how problems are solved.

There's people who identify more with System 1. And they tend to believe truth is found via System 1 and that this is how problems are solved.

(And there are various combinations of both.)

 

I've been thinking about roughly this idea lately.

There's people who are better at achieving their goals using S2, and people who are better at achieving their goals using S1, and almost everyone is a mix of these two types, selectively one or the other in certain contexts and for certain goals. Identifying with S2 or S1 then comes from observing which tends to do a better job of getting you what you want, so it starts to feel like that's the one that's in control, and then your sense of self gets bound up with whatever mental experiences correlate with getting what you want.

For me this has shown up as being a person who is mostly better at getting what he wants with S2, but my S2 is unusually slow, so for lots of classes of problems it fails me in the moment even if it knew what to do after the fact. Most of my most important personal developments have come on the back of using S2 long enough to figure out all the details of something so that S1 can take it over. A gloss of this process might be to say I'm using intelligence to generate wisdom.

I get the sense that other people are not in this same position. There's a bunch of people for whom S2 is fast enough that they never face the problem I do, and they can just run S2 fast enough to figure stuff out in real time. And then there's a whole alien-to-me group of folks who are S1 first and think of S2 as this slightly painful part of themselves they can access when forced to, but would really rather not.

Comment by Gordon Seidoh Worley (gworley) on The Other Existential Crisis · 2024-09-22T02:43:06.722Z · LW · GW

What will I do when I grow up, if AI can do everything?

One interesting thing about this question is that it comes from an implicit frame in which humans must do something to support their survival.

This is deeply ingrained in our biology and culture. As animals, we carry in us the well-worn drives to survive and reproduce; if we did not possess them, we would not exist, because our ancestors would never have created the unbroken chain of billions of years that led to us. And with those drives comes the need to do something useful to those ends.

As humans, we are enmeshed in a culture that exists at the frontier of a long process of becoming ever better at working together to get better at surviving, because those cultures that did it better outcompeted those that were worse at it. And so we approach our entire lives with this question in our minds: what actions will I take that contribute to my survival and the survival of my society?

Transformative AI stands to break the survival frame, where the problem of our survival is put into the hands of beings more powerful than ourselves. And so then the question becomes, what do we do if we don't have to do anything to survive?

I imagine quite a lot of things! Consider what it is like to be a pet kept by humans. They have all their survival needs met for them. Some of them are so inexperienced at surviving that they'd probably die if their human caretakers disappeared, and others would make it, but without the experience of years of caring for their own survival to make them experts at it. What do they do given they don't have to fight to survive? They live in luxury and happiness, if their caretakers love them and are skillful, or in suffering and sorrow, if their caretakers don't or aren't.

So perhaps like a dog who lives to chase a ball or a cat who lives for napping in the sun, we will one day live to tell stories, to play games, or to simply enjoy the pleasures of being alive. Let us hope that's the world we manage to create!

Comment by Gordon Seidoh Worley (gworley) on I finally got ChatGPT to sound like me · 2024-09-17T17:53:37.065Z · LW · GW

Did you have to prompt it in any special ways to get it to do this?

I've tried this same experiment several times in the past, because I have decades of writing that must be in the training set, but each time I didn't make progress: the fine-tuned model refused to recognize that I was a person it knew about and could make writing sound like, even though, when prompted differently, it could give me back unique claims that I made in posts.

I've not tried again with the latest models. Maybe they'll do it now?

Comment by Gordon Seidoh Worley (gworley) on Head in the Cloud: Why an Upload of Your Mind is Not You · 2024-09-17T17:47:54.717Z · LW · GW

My high level take is that this essay is confused about what minds are and how computers actually work, and it ends up in weird places because of that. But that's not a very helpful argument to make with the author, so let me respond to two points that the conclusion seems to hinge on.

A mind upload does not encapsulate our brain’s evolving neuroplasticity and cannot be said to be an instantiation of a mind. 

This seems like a failure to imagine what types of emulations we could build to create a mind upload. Why is this not possible, rather than merely a hard engineering problem to solve? As best I can tell, your argument is something like "computer programs are fragile and can't self-heal", but this is also true of our bodies and brains for sufficient levels of damage, and most computer programs are fragile by design because they favor efficiency. Robust computer programs where you could delete half of them and they'd still run are entirely possible to create. It's only a question of where resources are spent.

Likewise, it is not enough for a mind upload to behave in human-like ways for us to consider it sentient. It must have a physical, biological body, which it lacks by definition. 

This is nonsense. Uploads are still physically instantiated, just by different means. Your argument thus must hinge on the "biological body" claim, but you don't prove this point. To do so you'd need to provide an argument that there is something special about our bodies that cannot be successfully reproduced in a computer emulation even in theory.

It's quite reasonable to think current computers are not powerful enough to create a sufficiently detailed emulation to upload people today, but that does not itself preclude the development of future computers that are so capable. So you need an argument for why a computer of sufficient power to emulate a human body, including the brain, and an environment for it to live in is not possible at all, or would be impractical even with many orders of magnitude more compute (e.g. some problems can't be solved, even though it's theoretically possible, because they would require more compute than is physically possible to get out of the universe).


For what it's worth, you do hit on an important issue in mind uploading: minds are physically instantiated things that are embedded in the world, and attempts to upload minds that ignore this aren't going to work. The mind is not even just the brain; it's a system that exists in conjunction with the whole body and the world it finds itself in, such that it can't be entirely separated from them. But this is not necessarily a blocker to uploading minds. It's an engineering problem to be solved (or found to be unsolvable for some specific reason), not a theoretical problem with uploads.

Comment by Gordon Seidoh Worley (gworley) on Forever Leaders · 2024-09-15T20:50:54.025Z · LW · GW

I was slightly tempted to downvote on the grounds that I don't want to see posts like this on LW, but the author is new so instead I'll leave this comment.

What I dislike about this post is that it's making an extremely obvious and long discussed observation. There's nothing wrong with new people having this old insight—in fact, having insights others have already had can be confirmation that your thinking is going in a useful direction—but I'm not excited to read about an idea that people have thought of since before I was born (e.g. Asimov's Foundation series arguably includes exactly this idea of what happens when a leader lives forever, for a slightly unusual definition of "lives").

My guess is that others feel the same, and that helps explain this post's lukewarm response.

I'd be more excited to read a post that explored some new angle on the idea.

Comment by Gordon Seidoh Worley (gworley) on Collapsing the Belief/Knowledge Distinction · 2024-09-12T16:22:11.888Z · LW · GW

You don't make clear what distinction between belief and knowledge you are arguing against, so I can't evaluate your claim that there's no distinction between them.

Comment by gworley on [deleted post] 2024-09-08T21:55:26.743Z

I'm curious, why write about Erikson? He's interesting from a historical perspective, but the field of developmental psychology has evolved a lot since then and has better models than Erikson did.

Comment by Gordon Seidoh Worley (gworley) on What Depression Is Like · 2024-08-29T06:23:08.049Z · LW · GW

it's not meant to be tricky or particularly difficult in any way, just tedious.

Tedium still doesn't land for me as a description of what depression is like. I avoid doing all kinds of tedious things as a non-depressed person because I value my time. For example, I find cooking tedious, so I use money to buy my way out of having to spend a lot of time preparing meals, but I'm not depressed in general or about food specifically.

Perhaps depression makes things feel tedious that otherwise would not because of a lack of motivation to do them. For example, I like sweeping the floor, but sweeping the floor would feel tedious if I didn't get satisfaction from having clean floors. I probably wouldn't like sweeping the floor if I were depressed and didn't care about the floors being clean.

Maybe I'm splitting hairs here, but it seems to me worth making a clear distinction between what it feels like to be depressed and what the common symptoms of depression are. The lack of care seems to me like a good approximation of what it feels like; tediousness or puzzle solving seems more like a symptom that shows up for many people, but it is not in itself what it is like to be depressed, even if it is a frequent type of experience one has while depressed.

Comment by Gordon Seidoh Worley (gworley) on Why Large Bureaucratic Organizations? · 2024-08-29T06:11:34.279Z · LW · GW

I think there's something to what you say, but your model is woefully incomplete in ways that miss much of why large bureaucratic organizations exist.

  • Most organizations need to scale to a point where they will encounter principal-agent problems.
  • Dominance hierarchies offer a partial solution to principal-agent problems in that dominance can get agents to do what their principals want.
  • Dominance is not bad. Most people want to be at least partially dominated because by giving up some agency they get clear goals to accomplish in exchange, and that accomplishment gives them a sense of meaning.
    • Also they may care about the mission of the org but not know how to achieve its goals without someone telling them what to do.

Basically what I want to say is that dominance is instrumentally useful given human psychology and the goals of many organizations, and I think most organizations don't exist for the purpose of exerting dominance over other people except insofar as is necessary to achieve goals.

Comment by Gordon Seidoh Worley (gworley) on What Depression Is Like · 2024-08-28T22:46:20.886Z · LW · GW

I was depressed for most of my 20s. I can't say it felt anything like having to solve a puzzle to do things. It instead felt like I didn't care, lacked motivation, etc. Things weren't hard to do; I just didn't want to do them, or didn't think doing them would be worthwhile, because I expected bad stuff to happen as a result of doing things instead of good stuff.

Your model also contradicts most models I'm aware of that describe depression, which fit more with my own experience of a lack of motivation or care or drive to do things.

To me it sounds like you're describing something that is comorbid with depression for you. I don't have ADHD, but what you're describing pattern matches to how I hear people with ADHD describe the experience of trying to make themselves do things: like most activities are like a puzzle in that they require lots of S2-type thinking to make them happen.

Comment by Gordon Seidoh Worley (gworley) on How I started believing religion might actually matter for rationality and moral philosophy · 2024-08-24T20:39:47.129Z · LW · GW

Sure. I'll do my best to give some more details. This is all from memory, and it's been a while, so I may end up giving ahistorical answers that mix up the timeline. Apologies in advance for any confusion this causes. If you have more questions, or I'm not really getting at what you want to know, please follow up and I'll try again.

First, let me give a little extra context on the status thing. I had also not long before read Impro, which has a big section on status games, and that definitely informed how The e-Myth hit me.

So, there's this way in which managers play high and low. When managers play high they project high confidence. Sometimes this is needed, like when you need to motivate an employee to work on something. Sometimes it's counterproductive, like when you need to learn from an employee. Playing too high status can make it hard for you to listen, and hard for the person you need to listen to to feel like you are listening to them, which is what encourages them to tell you what you need to know. Think of the know-it-all manager who can do your job better than you, or the aloof manager uninterested in the details.

Playing low status is often a problem for managers, and not being able to play high is one thing that keeps some people out of management. No one wants to follow a low status leader. A manager doesn't necessarily need to be high status in the wider world, but they at least need to be able to claim higher status than their employees if those employees are going to want to do what they say.

The trouble is, sometimes managers need to play high playing low, like when a manager listens to their employee to understand the problems they are facing in their work, and actually listens rather than immediately dismissing the concerns or rounding them off to something they've dealt with before. A key technique can be literally lowering oneself, like crouching down to be at the eye level of someone sitting at a desk, as this non-verbally makes it clear that the employee is now in the driver's seat and the manager is along for the ride.

Effective managers know how to adjust their status when needed. The best are naturals who never had to be taught. Second best are those who figure out the mechanics and can deploy intentional status-play changes to get desired outcomes. I'm definitely not in the first camp. To any extent I'm successful as a manager, it's because I'm in the second.

Ineffective managers, by contrast, just don't understand any of this. They typically play high all the time, even at inappropriate times. That will keep a manager employed, but they'll likely be in the bottom quartile of manager quality, and will only succeed in organizations where little understanding and adaptation is needed. The worst is low playing high status (think Michael Scott in The Office). You only stay a manager if you are low playing high due to organizational dysfunction.

Okay, so all that out of the way, the way this worked for me was mostly in figuring out how to play high straight. I grew up with the idea that I was a smart person (because I was in fact more intelligent than lots of people around me, even if I had less experience and made mistakes due to lack of knowledge and wisdom). The archetypal smart person that most closely matched who I seemed to be was the awkward professor type who is a genius but also struggles to function. So I leaned into being that type of person and eschewed feedback I should be different because it wasn't in line with the type of person I was trying to be.

This meant my default status mode was high playing low playing high, by which I mean I saw myself as a high status person who played low, not because he wanted to, but because the world didn't recognize his genius, but who was going to press ahead and precociously aim for high status anyway. Getting into leadership, this kind of worked. Like I had good ideas, and I could convince people to follow them because they'd go "well, I don't like the vibe, but he's smart and been right before so let's try it", but it didn't always work and I found that frustrating.

At the time I didn't really understand what I was doing, though. What I realized, in part, after this particular insight, was that I could just play the status I wanted to straightforwardly. Playing multilayer status games is a defense mechanism, because if any one layer of the status play is challenged, you can fall back one more layer and defend from there. If you play straight, you're immediately up against a challenge to prove you really are what you say you are. So integration looked like peeling back the layers and untangling my behaviors to be more straightforward.

I can't say I totally figured it out from just this one insight. There was more going on that later insights would help me untangle. And I still struggle with it despite having a thorough theory and lots of experience putting it into play. My model of myself is that my brain literally runs slow, in that messages seem to propagate across it less quickly than they do for other people, as suggested by my relatively poor reaction times (+2 sd), and this makes it difficult for me to do high-bandwidth, real-time processing of information of the kind required in social settings like work. All this is to say that I've had to dramatically over-solve almost every problem in my life to achieve normalcy, but I expect most people wouldn't need as much as I have. Make of this what you will when thinking about what it means for me to have integrated insights: I can't rely on S2 thinking to help me in the moment; I have to do things with S1 or not at all (or rather with a significant async time delay).