Posts

We Can Build Compassionate AI 2025-02-25T16:37:06.160Z
Teaching Claude to Meditate 2024-12-29T22:27:44.657Z
Which things were you surprised to learn are metaphors? 2024-11-22T03:46:02.845Z
Fundamental Uncertainty: Epilogue 2024-11-16T00:57:48.823Z
Fundamental Uncertainty: Chapter 9 - How do we live with uncertainty? 2024-11-07T18:15:45.049Z
Word Spaghetti 2024-10-23T05:39:20.105Z
Can UBI overcome inflation and rent seeking? 2024-08-01T00:13:51.693Z
Finding the Wisdom to Build Safe AI 2024-07-04T19:04:16.089Z
How was Less Online for you? 2024-06-03T17:10:33.766Z
Fundamental Uncertainty: Chapter 8 - When does fundamental uncertainty matter? 2024-04-26T18:10:26.517Z
Dangers of Closed-Loop AI 2024-03-22T23:52:22.010Z
On "Geeks, MOPs, and Sociopaths" 2024-01-19T21:04:48.525Z
A discussion of normative ethics 2024-01-09T23:29:11.467Z
Extrapolating from Five Words 2023-11-15T23:21:30.865Z
Fundamental Uncertainty: Chapter 1 - How can we know what's true? 2023-08-13T18:55:44.861Z
Physics is Ultimately Subjective 2023-07-14T22:19:01.151Z
Optimal Clothing 2023-05-31T01:00:37.541Z
How much do personal biases in risk assessment affect assessment of AI risks? 2023-05-03T06:12:57.001Z
Fundamental Uncertainty: Chapter 7 - Why is truth useful? 2023-04-30T16:48:58.312Z
Industrialization/Computerization Analogies 2023-03-27T16:34:21.659Z
Fundamental Uncertainty: Chapter 6 - How can we be certain about the truth? 2023-03-06T13:52:09.333Z
Feelings are Good, Actually 2023-02-21T02:38:11.793Z
How much is death a limit on knowledge accumulation? 2023-02-14T03:54:16.070Z
Acting Normal is Good, Actually 2023-02-10T23:35:41.043Z
Religion is Good, Actually 2023-02-09T06:34:12.601Z
Drugs are Sometimes Good, Actually 2023-02-08T02:24:24.152Z
Sex is Good, Actually 2023-02-05T06:33:26.027Z
Small Talk is Good, Actually 2023-02-04T00:38:21.935Z
Exercise is Good, Actually 2023-02-02T00:09:18.143Z
Nice Clothes are Good, Actually 2023-01-31T19:22:06.430Z
Amazon closing AmazonSmile to focus its philanthropic giving to programs with greater impact 2023-01-19T01:15:09.693Z
MacArthur BART (Filk) 2023-01-02T22:50:04.248Z
Fundamental Uncertainty: Chapter 5 - How do we know what we know? 2022-12-28T01:28:50.605Z
[Fiction] Unspoken Stone 2022-12-20T05:11:23.231Z
The Categorical Imperative Obscures 2022-12-06T17:48:01.591Z
Contingency is not arbitrary 2022-10-12T04:35:07.407Z
Truth seeking is motivated cognition 2022-10-07T19:19:27.456Z
Quick Book Review: Crucial Conversations 2022-09-19T06:25:23.052Z
Keeping Time in Epoch Seconds 2022-09-10T00:28:08.137Z
Fundamental Uncertainty: Chapter 4 - Why don't we do what we think we should? 2022-08-29T19:25:16.917Z
Fundamental Uncertainty: Chapter 3 - Why don't we agree on what's right? 2022-06-25T17:50:37.565Z
Fundamental Uncertainty: Chapter 2 - Why do words have meaning? 2022-04-18T20:54:24.539Z
Modect Englich Cpelling Reformc 2022-04-16T23:38:50.212Z
Good Heart Donation Lottery Winner 2022-04-08T20:34:41.104Z
How I Got So Much GHT 2022-04-07T03:59:36.538Z
What are rationalists worst at? 2022-04-06T23:00:08.600Z
My Recollection of How This All Got Started 2022-04-06T03:22:48.988Z
You get one story detail 2022-04-05T04:38:36.022Z
Software Engineering: Getting Hired and Promoted 2022-04-04T22:31:52.967Z
My Superpower: OODA Loops 2022-04-04T01:51:46.622Z

Comments

Comment by Gordon Seidoh Worley (gworley) on Weirdness Points · 2025-02-28T23:52:58.994Z · LW · GW

So while your point is mostly true, I want to highlight there are some situations where simply asking people to respect your food norms is a problem, and they mostly arise in a specific sort of culture that is especially communal with regard to food and sees you as part of the ingroup.

For example, it's a traditional upper-class Anglo norm that it's rude to put your hosts out by asking them to make you something special to accommodate your diet. You're expected to get along and eat what everyone else eats. You will be accommodated if you ask, but you will also be substantially downgraded in how willing to get along you seem, and you'll be a less desired dinner guest, and thus get fewer invites and be less in.

I've heard of similar issues in some East Asian cultures where going vegan is seen as an affront to the family. "What do you mean you won't eat my cooking?!? Do you think you're better than your mother???!"

The problem is that food is tied with group membership, and you're expected to eat the same food as the rest of the ingroup. If you're not a rare outsider guest, you'll be seen as defecting on group cohesion.

But most Westerners are not part of cultures like these. Western culture is highly atomized, and everyone is seen as a unique individual, so it's not unusual that individuals might have unique food needs, and it becomes polite and a sign of a good host to accommodate everybody. But this is historically an unusual norm to have within the ingroup.

Comment by Gordon Seidoh Worley (gworley) on We Can Build Compassionate AI · 2025-02-26T21:38:08.835Z · LW · GW

Okay, you just doubled down, so clearly this discussion isn't going anywhere. It's also off topic anyway since it's not directly addressing the claims of this post.

Please kindly refrain from continuing this line of discussion on this post. I'd welcome additional comments if you wanted to address the claims of the post directly, though.

Comment by Gordon Seidoh Worley (gworley) on We Can Build Compassionate AI · 2025-02-26T16:55:31.752Z · LW · GW

Your claims overgeneralize, and that makes them false. To make sure I'm not just biased because I am religious myself, here's Claude's take on your comment when I asked it to fact-check it.

Prompt: can you fact check this comment that was posted to an online forum (not mine but i suspect it might not be correct but don't trust myself not to be biased so looking for your take) [your comment copied in]

Response:

I'll analyze this forum comment by examining its claims about religion, compassion, and philosophical history.

The comment contains several sweeping generalizations and historical inaccuracies:

1. **Religious texts and violence**: While Abrahamic texts do contain violent passages, characterizing the "overwhelming majority" as "justifications for genocide and ethnic supremacy" is factually incorrect. These texts contain diverse content including ethical teachings, poetry, historical narratives, and legal codes. The violent passages represent a minority of the content.

2. **"2,000 years of the worst violence in history"**: This statement ignores that violence has existed in all human societies regardless of religion. It also overlooks that many historical atrocities were driven by non-religious ideologies (e.g., 20th century totalitarian regimes).

3. **Religious monopoly on compassion**: While some religious groups do claim exclusive moral authority, many traditions explicitly teach universal compassion that extends beyond group boundaries. The comment oversimplifies complex theological positions across diverse traditions.

4. **Platonic origins claim**: The assertion that Abrahamic religions derived their concepts of compassion and empathy primarily from Plato is historically questionable. While Hellenistic philosophy influenced later Jewish and Christian thought, these traditions also drew from their own cultural and textual sources that pre-dated significant Greek influence.

5. **"Universal religion"**: This term is never clearly defined, making many of the claims difficult to evaluate precisely.

The comment does raise legitimate concerns about religious exclusivism and historical misuse of religion to justify violence, but its broad generalizations undermine its credibility as an objective analysis of religion's relationship to compassion and empathy.

Point 5 is obviously an artifact of me failing to give Claude context on what universal religion means, and I didn't define it in the article, but I think it's clear what I mean: religions that see it as their purpose to apply to all people, not just to a single ethnic group or location.

Comment by Gordon Seidoh Worley (gworley) on List of most interesting ideas I encountered in my life, ranked · 2025-02-23T16:05:23.105Z · LW · GW

Ranked in order of how interesting they were to me when I got interested in them, which means approximately chronological order, because the more ideas I knew, the less surprising new ideas were (since they were in part predicted by earlier ideas that had been very interesting).

  1. Cybernetics
  2. Evolution
  3. Game Theory
  4. Developmental psychology
  5. Dependent Origination
  6. The Problem of the Criterion

Comment by Gordon Seidoh Worley (gworley) on Reflections on the state of the race to superintelligence, February 2025 · 2025-02-23T16:00:50.349Z · LW · GW

While history suggests we should be skeptical, current AI models produce real results of economic value, not just interesting demos. This suggests that we should be willing to take more seriously the possibility that they will produce TAI, since they are more clearly on that path and already having significant transformative effects on the world.

Comment by Gordon Seidoh Worley (gworley) on Those of you with lots of meditation experience: How did it influence your understanding of philosophy of mind and topics such as qualia? · 2025-02-20T15:31:47.880Z · LW · GW

I think it's a mistake in many cases to let philosophy override what you care about. That's letting S2 do S1's job.

I'm not saying no one should ever be able to be convinced to care about something, only that the convincing, even if a logical argument is part of it, should not be all of it.

Comment by Gordon Seidoh Worley (gworley) on Those of you with lots of meditation experience: How did it influence your understanding of philosophy of mind and topics such as qualia? · 2025-02-19T14:30:41.524Z · LW · GW

I don't think a philosophy of mind is necessary for this, no, although I can see why it might seem like it is if you've already assumed that philosophy is necessary to understand the world.

It's enough to just be able to model other minds in the world to know how to show them compassion, and even without modeling, compassion can be enacted, even if it's not known to be compassionate behavior. This modeling need not rise to the level of philosophy to get the job done.

Comment by Gordon Seidoh Worley (gworley) on SWE Automation Is Coming: Consider Selling Your Crypto · 2025-02-14T00:02:36.678Z · LW · GW

I'm a SWE, use AI every day to do my job, and I think the idea that AI is the cause of reduced engineer hiring is basically false.

There is probably some marginal effect, but I instead think what we're seeing today is because:

  • interest rates are high relative to the boom years of 2012-2022
  • this pushes the risk free rate up
  • this means VCs can be more conservative
  • thus they demand portfolio companies spend money more efficiently
  • an easy way to be more efficient is to employ higher productivity SWEs
  • this cuts out the bottom of the market because it falls below a productivity bar

If interest rates were still 0%, companies could afford to hire lower productivity engineers and things would be more similar to how they were in the past. Also, on this argument, if AI makes engineers more productive, we'd also expect AI to be putting more people over the productivity bar, and thus mitigating the higher risk free rate effects. Thus, it seems, if anything, AI is having less of a real impact than it seems like it is.
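
To make the productivity-bar argument concrete, here's a toy sketch with made-up numbers (mine, not the comment's): an investor funds a role only if its return on cost beats the risk-free rate plus some premium, so raising the risk-free rate cuts off the least productive hires.

```python
def hire_clears_bar(annual_value: float, annual_cost: float,
                    risk_free_rate: float, risk_premium: float = 0.10) -> bool:
    """Fund the hire only if its return on cost beats the risk-free
    rate plus a premium for taking on any risk at all."""
    return_on_cost = (annual_value - annual_cost) / annual_cost
    return return_on_cost > risk_free_rate + risk_premium

# A mid-productivity engineer: $224k of value at a $200k fully loaded cost.
print(hire_clears_bar(224_000, 200_000, risk_free_rate=0.00))  # True: hired at ~0% rates
print(hire_clears_bar(224_000, 200_000, risk_free_rate=0.05))  # False: below the bar at ~5%
```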

I don't know if SWE automation is coming. Programming automation is already here. Whether that puts engineers out of work remains to be seen (so far, no).

Comment by Gordon Seidoh Worley (gworley) on ≤10-year Timelines Remain Unlikely Despite DeepSeek and o3 · 2025-02-13T21:03:08.189Z · LW · GW

> For example, I suspect philosophical intelligence was a major driver behind Eliezer's success (and not just for his writing about philosophy). Conversely, I think many people with crazy high IQ who don't have super impressive life achievements (or only achieve great things in their specific domain, which may not be all that useful for humanity) probably don't have super high philosophical intelligence.

Rather than "philosophical intelligence" I might call this "ability to actually win", which is something like being able to keep your thoughts in contact with reality, which is surprisingly hard to do for most complex thoughts that get tied up into one's self beliefs. Most people get lost in their own ontology and make mistakes because they let the ontology drift free from reality to protect whatever story they're telling about themselves or how they want the world to be.

Comment by Gordon Seidoh Worley (gworley) on ≤10-year Timelines Remain Unlikely Despite DeepSeek and o3 · 2025-02-13T20:59:43.196Z · LW · GW

> AI will not kill everyone without sequential reasoning.

This statement might be literally true, but only because of a loophole like "AI needs humans to help it kill everyone". Like we're probably not far away from, or may already have, the ability to create novel biological weapons, like engineered viruses, that could kill all humans before a response could be mustered. Yes, humans have to ask the LLM to help create the thing and then humans have to actually do the lab work and deployment, but from an outside view (which is especially important from a policy perspective), this looks a lot like "AI could kill everyone without sequential reasoning".

Comment by Gordon Seidoh Worley (gworley) on Those of you with lots of meditation experience: How did it influence your understanding of philosophy of mind and topics such as qualia? · 2025-02-03T16:26:48.261Z · LW · GW

Yes

Comment by Gordon Seidoh Worley (gworley) on Those of you with lots of meditation experience: How did it influence your understanding of philosophy of mind and topics such as qualia? · 2025-01-31T08:11:19.109Z · LW · GW

I basically don't care about philosophy of mind anymore, mostly because I don't care about philosophy anymore.

Philosophy, as a project, is usually about two things. One, figure out metaphysics. Two, figure out a correct ontology for reality.

Both of these are flawed projects. Metaphysics is that which we can't know from experience, so it's all speculative and also unnecessary, because we can model the world adequately without presuming to know how it works beyond our ability to observe it. Fake metaphysics is contingently helpful because it lets you have fake models that are easier to reason about, but that's the main use case.

As for finding a correct ontology, we know the map is not the territory, and further there are many possible maps. All models are wrong, some are useful.

I did still care about philosophy a lot right up until "I" switched into PNSE, which happened after several thousand hours of meditation practice, and a lot of other things, too.

Basically what I can say is, the whole idea of philosophy of mind is confused, because it supposes mind to be something separate from reality itself. But the world is only known through mind, and so the world is mind. The appearance of an external world is a useful model for predicting future experiences, and it works well to think and behave as if there is an external reality, because that's a metaphysical belief that pays substantial rent. But, epistemologically speaking, external reality is not prior to experience, and thus deeper questions about consciousness are mostly confused because they mix up causal dependency in one's ontology.

Comment by Gordon Seidoh Worley (gworley) on Everywhere I Look, I See Kat Woods · 2025-01-16T16:09:29.550Z · LW · GW

This is gonna sound mean, but the quality of EA-oriented online spaces has really gone downhill in the last 5 years. I barely even noticed Kat Woods' behavior, because she is just one more in a sea of high volume, low quality content being posted in EA spaces.

That's why I've mostly given up on EA sites and events, other than attending EA Global (can't break my streak), and just hang out here on Less Wrong where the vibes are still good and the quality bar is higher.

Comment by Gordon Seidoh Worley (gworley) on Why modelling multi-objective homeostasis is essential for AI alignment (and how it helps with AI safety as well) · 2025-01-14T03:37:50.270Z · LW · GW

A couple notes:

  • I expect future AI to be closed-loop
  • Closed-loop AI is more dangerous than open-loop AI
  • That said, closed-loops allow the possibility of homeostasis, which open-loop AI does not
  • I agree that homeostatic processes, specifically negative-feedback loops, are why ~everything in the universe stays in balance. If positive feedback weren't checked there wouldn't be anything interesting in the world. (See the sketch after this list.)
  • AI is moving towards agents. Agents are, by their nature, homeostatic processes, at least for the duration of their time trying to achieve a goal.
  • Even if we can't align open-loop systems like LLMs, maybe we can align closed-loop systems by preventing runaway positive feedback loops.
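
A minimal sketch (mine, not from the post or the comment) of what a closed-loop, homeostatic process looks like: negative feedback applies a correction that opposes deviations from a set point, which is what keeps such systems in balance. The thermostat model and its numbers are illustrative assumptions.

```python
def feedback_step(temp: float, set_point: float, gain: float = 0.5) -> float:
    """Negative feedback: apply a correction that opposes the deviation."""
    error = temp - set_point
    return temp - gain * error  # push back toward the set point

temp = 30.0  # start far from the set point
for _ in range(10):
    temp = feedback_step(temp, set_point=20.0)
print(round(temp, 2))  # ~20.01: settled near the set point

# Flipping the sign (temp + gain * error) turns this into a positive
# feedback loop: the deviation grows each step and the system runs away
# instead of staying in balance.
```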

Comment by Gordon Seidoh Worley (gworley) on Teaching Claude to Meditate · 2024-12-30T17:45:41.350Z · LW · GW

No, I've only tried it with Claude so far. I did think about trying other models to see how they compare, but Claude gave me enough info that trying to do this in chat is unlikely to be useful. What I learned was enough to feel like, in theory, teaching LLMs to meditate is not exactly a useful thing to do, but if it is, then it needs to happen as part of training.

Comment by Gordon Seidoh Worley (gworley) on Being Present is Not a Skill · 2024-12-29T08:18:35.922Z · LW · GW

Memory reconsolidation

Comment by Gordon Seidoh Worley (gworley) on No, the Polymarket price does not mean we can immediately conclude what the probability of a bird flu pandemic is. We also need to know the interest rate! · 2024-12-28T20:54:39.375Z · LW · GW

Also, more generally, no prediction market price lets you immediately conclude what the probability of any outcome is, because for most markets we have only subjective probability (maybe this is always true, but I'm trying to ignore things like fair coin flips that have agreed-upon "objective" probabilities), so there is no fact of the matter about the real probability of something happening, only the subjective probability based on the available information.

Instead, a prediction market price is simply, in the ideal case, the market-clearing price at which people are willing to take bets on either side of the question at this moment in time. This price represents a marginal trading point: participants with higher subjective probabilities than the market price will buy, while those with lower will sell. This is importantly different from the true probability of an outcome, and it's a general mistake to treat the two as the same.

Then there are other factors, like the interest rate you mention, but also issues like insufficient volume, large traders intentionally distorting the market, etc., that can make the market-clearing price less useful for inferring what subjective probability an observer should assign to a possible outcome.

Instead, a prediction market provides aggregate information that a person can use to make their own assessment of the subjective probability of an outcome, and if their assessment differs from the market's they can make a bet that is subjectively positive in expectation, but in no way is the market price of any prediction market the probability of any outcome.
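
As a small sketch of the marginal-trading-point idea (my numbers, purely illustrative): a bettor whose subjective probability equals the market price is indifferent, while anyone whose probability differs sees positive expected value on one side of the bet.

```python
def ev_buy_yes(subjective_p: float, market_price: float) -> float:
    """Expected value per $1 YES share bought at market_price:
    win (1 - price) with probability p, lose the price paid otherwise."""
    return subjective_p * (1 - market_price) - (1 - subjective_p) * market_price

print(round(ev_buy_yes(0.40, 0.30), 2))  # 0.1: positive EV, so buy
print(round(ev_buy_yes(0.30, 0.30), 2))  # 0.0: indifferent at the market price
print(round(ev_buy_yes(0.20, 0.30), 2))  # -0.1: the NO side is the better bet
```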

Comment by Gordon Seidoh Worley (gworley) on The average rationalist IQ is about 122 · 2024-12-28T20:40:17.250Z · LW · GW

Honestly, this fits my intuition. If I think of all the rationalists I know, they feel like they are on average near 120 IQ, with what feels like a normal distribution around that, though in reality it's probably not quite normal, with a longer upper tail than lower tail, i.e. fewer 90s than 150s, etc. Claims that the average is much higher than 120 feel off to me, relative to folks I know and have interacted with in the community (insert joke about how I have "dumb" friends maybe).

Comment by gworley on [deleted post] 2024-12-28T05:07:07.237Z

Mine:

The world is perfect, meaning it is exactly as it is and always was going to be. However, the world as we know it is an illusion in that it only exists in our minds. We only know our experience, and all (metaphysical) claims to know reality, no matter how useful and predictive they are, are contingent and not fundamental. But we get confused about this because those beliefs are really useful and really predictive, and we separate ourselves from reality by first thinking the world is real, and then thinking our beliefs are about the world rather than of the world itself.

Thus the first goal of all self-aware beings is to get straight in their mind that everything is an illusion. This changes nothing about daily life because everything adds up to normality, but we are no longer confused. Knowing that all is illusion eliminates our fundamental source of suffering that's created by seeing ourselves as separate from the world, and thus we allow ourselves to return to the original joy of experience.

Having gotten our minds straight, now we can approach the task of shaping the world (which is, again, an illusion we construct in our minds, and is only very probably a projection of some external reality into our minds) to better fit our preferences. We can take our preferences far. They weren't designed to be maximized, but nonetheless we can do better than we do today. We can build machines and social technologies and communities (or at least create the illusion of these things in the very ordinary way we create all our illusions) to make possible the world we more want to live in. And everyone can do this, for they are not separate from us. Their preferences are our own; ours theirs. Together we can create a beautiful illusion free of pain and strife and full of flourishing.

Comment by Gordon Seidoh Worley (gworley) on Why don't we currently have AI agents? · 2024-12-26T20:36:18.661Z · LW · GW

I can't help but wonder if part of the answer is that they seem dangerous and people are selecting out of producing them.

Like I'm not an expert, but creating AI agents seems extremely fun and appealing, and I'm intentionally not working on them because it seems safer not to build them. (Whether you think my contributions to trying to build them would matter or not is another question.)

Comment by Gordon Seidoh Worley (gworley) on What are the strongest arguments for very short timelines? · 2024-12-24T19:36:43.122Z · LW · GW

Most arguments I see in favor of AGI ignore economic constraints. I strongly suspect that we can't actually afford to create AGI yet; world GDP isn't high enough. These arguments tend to focus on inside-view reasons why method X will make it happen, which, sure, maybe, but even if we achieve AGI, if we aren't rich enough to run it or use it for anything, it hardly matters.

So the question in my mind is, if you think AGI is soon, how are we getting the level of economic growth needed in the next 2-5 years, before AGI is created, to afford to use AGI at all?

Comment by Gordon Seidoh Worley (gworley) on Vegans need to eat just enough Meat - emperically evaluate the minimum ammount of meat that maximizes utility · 2024-12-24T02:51:18.764Z · LW · GW

Just to verify: you were also eating rice with those lentils? I'd expect you to be protein deficient in a different way if you only eat lentils. The right combo is beans and rice (or another grain).

Comment by Gordon Seidoh Worley (gworley) on Vegans need to eat just enough Meat - emperically evaluate the minimum ammount of meat that maximizes utility · 2024-12-23T00:10:05.452Z · LW · GW

If someone has gone so far as to buy supplements, they have already done far more to engineer their nutrition than the vegans who I've known who struggle with nutrition.

Comment by Gordon Seidoh Worley (gworley) on Good Reasons for Alts · 2024-12-23T00:08:11.997Z · LW · GW

I generally avoid alts for myself, and one of the benefits I see is that I feel the weight of what I'm about to post.

Maybe I would sometimes write funnier, snarkier things on Twitter that would get more likes, but because my name is attached I'm forced to reconsider. Is this actually mean? Do I really believe this? Does this joke go too far?

Strange to say perhaps, but I think not having alts makes me a better person, in the sense of being better at being the type of person I want to be, because I can't hide behind anonymity.

Comment by Gordon Seidoh Worley (gworley) on The nihilism of NeurIPS · 2024-12-22T23:59:12.499Z · LW · GW

Thanks for writing this up. This is something I think a lot of people are struggling with, and will continue to struggle with as AI advances.

I do have worries about AI, mostly that it will be unaligned with human interests and we'll build systems that squash us like bugs because they don't care if we live or die. But I have no worries about AI taking away our purpose.

The desire to feel like one has a purpose is a very human characteristic. I'm not sure that any other animals share our motivation to have a motivation. In fact, past humans seemed to have less of this, too, if reports of extant hunter-gatherer tribes are anything to go by. But we feel like we're not enough if we don't have a purpose to serve. Like our lives aren't worth living if we don't have a reason to be.

Maybe this was a historically adaptive fear. If you were in a small band or a pre-industrial society, every person had a real cost to existing. Societies existed up against the Malthusian limit, and there was no capacity to feed more mouths. You either contributed to society, or you got cast out, because everyone was in survival mode, and surviving is what we had to do to get here.

But AI could make it so that literally no one has to work ever again. If we get it right, perhaps we will have no purpose we must serve to ensure our continued survival. Is that a problem? I don't think it has to be!

Our minds and cultures are built around the idea that everyone needs to contribute. People internalize this need, and one way it can come out is as feeling like life is not worth living without purpose.

But you do have a purpose, and it's the same one all living things share: to exist. It is enough to simply be in the world. Everything else is contingent on what it takes to keep existing.

If AI makes it so that no one has to work, that most of us are out of jobs, that we don't even need to contribute to setting our own direction, that need not necessarily be bad. It could go badly, yes, but it could also be freeing to be as we wish, rather than as we must.

I speak from experience. I had a hard time seeing that simply being is enough. I've also met a lot of people who had this same difficulty, because it's what draws them to places like the Zen center where I practice. And everyone is always surprised to discover, sometimes after many years of meditation, that there was never anything that needed to be done to be worthy of this life, and if we can eliminate the need to do things to get to keep living this life, so that none need lose it due to accident or illness or confusion or anything else, then all the better.

Comment by Gordon Seidoh Worley (gworley) on Vegans need to eat just enough Meat - emperically evaluate the minimum ammount of meat that maximizes utility · 2024-12-22T23:41:45.408Z · LW · GW

I want to push back a little in that I was fully vegan for a few years with no negative side effects, other than sometimes being hungry because there was nothing I would eat and annoying my friends with requests to accommodate my dietary preferences. I even put on muscle and cut a lot of fat from my body!

I strongly suspect, based on experience with lots of other vegans, that vegans who struggle with nutritional deficiencies are bad at making good choices about macro nutrients.

Broadly speaking, the challenge in a vegan diet is getting enough lysine. Most every other nutrient you need is found in abundance, but lysine is tricky because humans mostly get that amino acid from meat. Getting enough isn't that hard if you know what to eat, but you have to eat it in enough volume to avoid problems.

What does it take to get enough lysine? Beans, lots of beans! If you're vegan and not eating beans you are probably lysine deficient and need to eat more beans. How many beans? Way more than you think. Beans have lots of fiber and aren't nutrient dense like meat.
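
For a rough sense of scale, here's back-of-the-envelope arithmetic with illustrative figures (my assumptions, not the comment's; check a nutrition database for real values): roughly 2.1 g of lysine per day for a 70 kg adult (~30 mg/kg), and roughly 0.6 g of lysine per 100 g of cooked beans.

```python
LYSINE_NEEDED_G_PER_DAY = 2.1   # assumed requirement, 70 kg adult
LYSINE_G_PER_100G_BEANS = 0.6   # assumed content of cooked beans

grams_of_beans = LYSINE_NEEDED_G_PER_DAY / LYSINE_G_PER_100G_BEANS * 100
print(f"~{grams_of_beans:.0f} g of cooked beans per day")  # ~350 g: roughly two cups
```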

I met lots of vegans who didn't eat enough beans. They'd eat mushrooms, but not enough, and lots of other protein sources, but not ones with enough lysine. They'd just eat a random assortment of vegan things without really thinking hard about whether they were eating the right things. That's a strategy that works if you eat a standard diet that's been evolved by our culture to be relatively complete, but not if you're eating a constructed diet like modern vegans do.

Now, I have met a few people who seem to have individual variation issues that make it hard for them to eat vegan and stay healthy. In fact, I'm now one of those, because I developed some post-COVID food sensitivities that forced me to go vegetarian and then start eating meat when that wasn't enough. And some people seem to process protein differently in a way that is weird to me but they insist if they don't eat some meat every 4 hours or so they feel like crap.

So I'm not saying there aren't some people who do need to eat meat and just reduce the amount and that's the best they can safely do, but I'm also saying that I think a lot of vegans screw up not because they don't eat meat but because they don't think seriously enough about if they are getting enough lysine every day.

Comment by Gordon Seidoh Worley (gworley) on Being Present is Not a Skill · 2024-12-21T05:04:43.077Z · LW · GW

What would it mean for this advice to not generalize? Like what cases are you thinking of where what someone needs to do to be more present isn't some version of resolving automatic predictions of bad outcomes?

I ask because this feels like a place where disagreeing with the broad form of the claim suggests you disagree with the model of what it means to be present rather than that you disagree with the operationalization of the theory, which is something that might not generalize.

Comment by Gordon Seidoh Worley (gworley) on Being Present is Not a Skill · 2024-12-21T04:56:55.047Z · LW · GW

I think you still have it wrong, because being present isn't a skill. It's more like an anti-skill: you have to stop doing all the stuff you're doing that keeps you from just being.

There is, instead, a different skill that's needed to make progress towards being present. It's a compound skill built around noticing what you do out of habit rather than in response to present conditions, figuring out why you have those habits, practicing not engaging in those habits when you otherwise would, and thereby developing trust that you can safely drop those habits, thus retraining yourself to do less out of habit and be closer to just being and responding.

Comment by Gordon Seidoh Worley (gworley) on Information vs Assurance · 2024-12-09T18:09:55.268Z · LW · GW

I can't think of a time where such false negatives were a real problem. False positives, in this case, are much more costly, even if the only cost is reputation.

If you never promise anything, that could be a problem. Same if you make promises but no one believes them. Being able to make commitments is sometimes really useful, so you need to at least keep alive the ability to make and hit commitments so you can use them when needed.

Comment by Gordon Seidoh Worley (gworley) on Being at peace with Doom · 2024-12-06T01:57:20.194Z · LW · GW

As AI continues to accelerate, the central advice of this post, to be at peace with doom, will become increasingly important for helping people stay sane in a world where it may seem like there is no hope. But really there is hope so long as we keep working to avert doom, even if it's not clear how we do that, because we've only truly lost when we stop fighting.

Comment by Gordon Seidoh Worley (gworley) on Recreating the caring drive · 2024-12-06T01:54:36.299Z · LW · GW

I'd really like to see more follow-up on the ideas in this post. Our drive to care is arguably why we're willing to cooperate, and making AI that cares the same way we do is a potentially viable path to AI aligned with human values, but I've not seen anyone take it up. Regardless, I think this is an important idea and think folks should look at it more closely.

Comment by Gordon Seidoh Worley (gworley) on You don't get to have cool flaws · 2024-12-06T01:52:48.917Z · LW · GW

This post makes an easy-to-digest and compelling case for getting serious about giving up flaws. Many people build their identity around various flaws, and having a post that crisply makes the case that doing so is net bad is helpful to be able to point people at when you see them suffering in this way.

Comment by Gordon Seidoh Worley (gworley) on Teleosemantics! · 2024-12-06T01:50:45.801Z · LW · GW

I think this post is important because it brings old insights from cybernetics into a modern frame that relates to how folks are thinking about AI safety today. I strongly suspect that the big idea in this post, that ontology is shaped by usefulness, matters greatly to addressing fundamental problems in AI alignment.

Comment by Gordon Seidoh Worley (gworley) on Orca communication project - seeking feedback (and collaborators) · 2024-12-03T18:10:34.117Z · LW · GW

I'm less confident than you are about your opening claim, but I do think it's quite likely that we can figure out how to communicate with orcas. Kudos for just doing things.

I'm not sure how it would fit with their mission, but maybe there's a way you could get funding from EA Funds. It doesn't sound like you need a lot of money.

Comment by Gordon Seidoh Worley (gworley) on 2024 Unofficial LessWrong Census/Survey · 2024-12-03T06:38:11.254Z · LW · GW

Completed

Comment by Gordon Seidoh Worley (gworley) on Which Biases are most important to Overcome? · 2024-12-01T18:53:42.162Z · LW · GW

The Typical Mind Fallacy is the most important bias in human reasoning.

How do I know? Because it's the one I struggle with the most!

Comment by Gordon Seidoh Worley (gworley) on What epsilon do you subtract from "certainty" in your own probability estimates? · 2024-11-27T06:49:29.374Z · LW · GW

Back when I tried playing some calibration games, I found I was not able to get successfully calibrated above 95%. At that point I start making errors from things like "misinterpreting the question" or "randomly hit the wrong button" and things like that.

The math is not quite right on this, but from this I've adopted a personal 5% error margin policy; this seems in practice to be about the limit of my ability to make accurate predictions, and it's served me well.
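
As a minimal formalization of that policy (my sketch, not the author's stated rule): treat any probability estimate more extreme than the error margin as untrustworthy and pull it back toward the margin.

```python
ERROR_MARGIN = 0.05  # ~5%: about where calibration broke down in practice

def clamp_probability(p: float, margin: float = ERROR_MARGIN) -> float:
    """Pull stated probabilities back from certainty by the error margin."""
    return min(max(p, margin), 1.0 - margin)

print(clamp_probability(0.999))  # 0.95: capped at the calibration limit
print(clamp_probability(0.50))   # 0.5: mid-range estimates are unaffected
```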

Comment by Gordon Seidoh Worley (gworley) on Which things were you surprised to learn are metaphors? · 2024-11-27T06:44:06.671Z · LW · GW

What does this mean?

Comment by Gordon Seidoh Worley (gworley) on Which things were you surprised to learn are not metaphors? · 2024-11-22T03:46:58.172Z · LW · GW

I like this question a lot, but I'm more interested in its opposite, so I asked it!

Comment by Gordon Seidoh Worley (gworley) on What are the good rationality films? · 2024-11-21T16:50:58.369Z · LW · GW

Yes, this is why I like the movie better than the short story. PKD did more of what Total Recall did in other stories, like Ubik and A Scanner Darkly and The Man Who Japed, but never sends it fully the way Total Recall does.

Comment by Gordon Seidoh Worley (gworley) on What are the good rationality films? · 2024-11-20T20:59:41.812Z · LW · GW

Quest (1984)

This movie was written by Ray Bradbury.

It's about people who have 8-day lifespans, and follows the story of a boy who grows up to fulfill a great quest. I like it from a rationalist standpoint because it has themes similar to those we have around AI, life extension, and more: we have a limited time to achieve something, and if we don't pull it off we are at least personally doomed, and maybe societally, too.

Comment by Gordon Seidoh Worley (gworley) on What are the good rationality films? · 2024-11-20T20:56:05.807Z · LW · GW

PT Barnum (1999)

This is a made for TV movie that can easily be found for free on YouTube.

I like it because it tells a somewhat fictionalized account of PT Barnum's life that shows him as an expert in understanding the psychology of people and figuring out how to give them products they'll love. Some might say what he does is exploitative, but the movie presents him as not much different than modern social media algorithms that give us exactly what we want, even if we regret it in hindsight.

The rationalist angle is coming away with a sense of what it's like to be a live player who is focused on achieving something and in deep contact with reality to achieve it, willing to ignore social scripts in order to get there.

Comment by Gordon Seidoh Worley (gworley) on What are the good rationality films? · 2024-11-20T20:46:35.575Z · LW · GW

Total Recall (1990)

Based on the Philip K. Dick short story "We Can Remember It For You Wholesale". The movie is better than the short story.

I can't tell you why this is a rationality movie without spoilers...

The movie masterfully sucks you into a story where you don't know if you're watching what's really happening, or if you're watching the false memories inserted into the protagonist's mind at the start of the film. Much of the fun for rationalists would be trying to figure out if the film was reality or implanted memory.

Comment by gworley on [deleted post] 2024-11-20T06:15:13.258Z

It's not quite like the dot-com bust. The bottom of the market is very soft, with new grads basically having no options, but the top of the market is extremely tight, with the middle doing about like normal. Employers feel they can be more choosy right now for all roles, though, so they are. That will change if roles sit unfilled for longer.

Comment by Gordon Seidoh Worley (gworley) on The Choice Transition · 2024-11-18T18:52:22.096Z · LW · GW

How would you compare your ideas here to Asimov's fictional science of psychohistory? I ask because while reading this post I kept getting flashbacks to Foundation.

Comment by Gordon Seidoh Worley (gworley) on Fundamental Uncertainty: Chapter 9 - How do we live with uncertainty? · 2024-11-08T01:03:56.891Z · LW · GW

Yes, red is perhaps the most useful color to be able to see! That's why I chose to use it in this example.

Comment by gworley on [deleted post] 2024-11-05T03:14:31.323Z

I don't know, but I can say that after a lot of hours of Alexander lessons my posture and movement improved in ways that would be described as "having less muscle tension" and this having less tension happened in conjunction with various sorts of opening and being more awake and moving closer to PNSE.

Comment by Gordon Seidoh Worley (gworley) on Death notes - 7 thoughts on death · 2024-10-29T03:25:36.676Z · LW · GW

Thank you for sharing your thoughts, and sorry for your losses. It's often hard to talk about death, especially about the deaths of those we love. I don't really have anything other to say than that I found this moving to read, and I'm glad you shared it with us.

Comment by Gordon Seidoh Worley (gworley) on somebody explain the word "epistemic" to me · 2024-10-28T17:16:29.344Z · LW · GW

Here's more answer than you probably wanted.

First up, the word "epistemic" solves a limitation of the word "knowledge", which doesn't easily turn into an adjective. Yes, like all nouns in English it can be used like an adjective in the creation of noun phrases, but "knowledge state" and "knowledge status" don't sound as good.

But more importantly there's a strong etymological reason to prefer the word "epistemic" in these cases. "Epistemic" comes from "episteme", one of Greek's words for knowledge[1]. Episteme is knowledge that is justified by observation and reason, and importantly is known because the knower was personally convinced of the justification, as opposed to gnosis, where the only justification is experience, or doxa, which is second-hand knowledge[2].

Thus "epistemic" carries with it the connotation of being related to justified beliefs. An "epistemic state" or "epistemic status" implies a state or status related to how justified one's beliefs are.

  1. ^

    "Knowledge" is cognate with another Greek word for knowledge, "gnosis", but the two words evolved along different paths from PIE *gno-, meaning "know".

  2. ^

    We call doxa "hearsay" in English, but because of that word's use in legal contexts, it carries some pejorative baggage related to how hearsay is treated in trials. To get around this we often avoid the word "hearsay" and instead focus on our level of trust in the person we learned something from, but this doesn't make a clear distinction between hearsay and personally justified knowledge.

Comment by Gordon Seidoh Worley (gworley) on The hostile telepaths problem · 2024-10-27T20:30:49.232Z · LW · GW

I'm sure my allegiance to these United States was not created just by reciting the Pledge thousands of times. In fact, I resented the Pledge for a lot of my life, especially once I learned more about its history.

But if I'm honest with myself, I do feel something like strong support for the ideals of the United States, much stronger than would make sense if someone had convinced me as an adult that its founding principles were a good idea. The United States isn't just my home. I yearn for it to be great, to embody its values, and to persist, even as I disagree with many of the details of how we're implementing the dream of the founders today.

Why do I think the Pledge mattered? It helped me get the feeling right. Once I had positive feelings about the US, of course I wanted to actually like the US. I latched onto the part of it that resonates with me: the founding principles. Someone else might be attracted to something else, or maybe would even find they don't like the United States, but stay loyal to it because they have to.

I'm also drawing on my experience with other fake-it-until-you-make-it rituals. For example, I and many people really have come to feel more grateful for the things we have in life by explicitly acknowledging that gratitude. At the start it's fake: you're just saying words. But eventually those words start to carry meaning, and before long it's not fake. You find the gratitude that was already inside you and learn how to express it.

In the opening example, I bet something similar could work for getting kids to apologize. No need to check if they are really sorry, just make them say sorry. Eventually the sadness at having caused harm will become real and flow into the expression of it. It's like a kind of reverse training, where you create handles for latent behaviors to crystallize around, and by creating the right conditions when the ritual is performed, you stand a better-than-chance possibility of getting the desired association.