Posts

Parasitic Language Games: maintaining ambiguity to hide conflict while burning the commons 2023-03-12T05:25:26.496Z
Causal vs Predictive Models, and the Causal Taboo 2021-10-18T15:05:10.457Z
Predictive Categories Make Bad Causal Variables 2021-10-18T15:02:58.099Z
When Arguing Definitions is Arguing Decisions 2021-07-25T16:45:40.575Z
What does flow feel like for "mental activities"? 2021-02-05T14:33:59.579Z
How to Absorb a Shared Success Script (while also thinking you're living without one) 2021-01-31T20:17:10.433Z
How webs of meaning grow and change 2020-08-14T13:58:38.422Z
Gödel's Legacy: A game without end 2020-06-28T18:50:43.536Z
Neural Basis for Global Workspace Theory 2020-06-22T04:19:15.947Z
Research snap-shot: question about Global Workspace Theory 2020-06-15T21:00:28.749Z
The "hard" problem of consciousness is the least interesting problem of consciousness 2020-06-08T19:00:57.962Z
Legibility: Notes on "Man as a Rationalist Animal" 2020-06-08T02:18:01.935Z
Consciousness as Metaphor: What Jaynes Has to Offer 2020-06-07T18:41:16.664Z
Finding a quote: "proof by contradiction is the closest math comes to irony" 2019-12-26T17:40:41.669Z
ToL: Methods and Success 2019-12-10T21:17:56.779Z
ToL: This ONE WEIRD Trick to make you a GENIUS at Topology! 2019-12-10T21:02:40.212Z
ToL: The Topological Connection 2019-12-10T20:29:06.998Z
ToL: Introduction 2019-12-10T20:19:06.029Z
ToL: Foundations 2019-12-10T20:15:09.099Z
Books on the zeitgeist of science during Lord Kelvin's time. 2019-12-09T00:17:30.207Z
The Actionable Version of "Keep Your Identity Small" 2019-12-06T01:34:36.844Z
Hard to find factors messing up experiments: Examples? 2019-11-15T17:46:03.762Z
Books/Literature on resolving technical disagreements? 2019-11-14T17:30:16.482Z
Paradoxical Advice Thread 2019-08-21T14:50:51.465Z
The Internet: Burning Questions 2019-08-01T14:46:17.164Z
How much time do you spend on twitter? 2019-08-01T12:41:33.289Z
What are the best and worst affordances of twitter as a technology and as a social ecosystem? 2019-08-01T12:38:17.455Z
Do you use twitter for intellectual engagement? Do you like it? 2019-08-01T12:35:57.359Z
How to Ignore Your Emotions (while also thinking you're awesome at emotions) 2019-07-31T13:34:16.506Z
Where is the Meaning? 2019-07-22T20:18:24.964Z
Prereq: Question Substitution 2019-07-18T17:35:56.411Z
Prereq: Cognitive Fusion 2019-07-17T19:04:35.180Z
Magic is Dead, Give me Attention 2019-07-10T20:15:24.990Z
Decisions are hard, words feel easier 2019-06-21T16:17:22.366Z
Splitting Concepts 2019-06-21T16:03:11.177Z
STRUCTURE: A Hazardous Guide to Words 2019-06-20T15:27:45.276Z
Defending points you don't care about 2019-06-19T20:40:05.152Z
Words Aren't Type Safe 2019-06-19T20:34:23.699Z
Arguing Definitions 2019-06-19T20:29:44.323Z
What is your personal experience with "having a meaningful life"? 2019-05-22T14:03:39.509Z
Models of Memory and Understanding 2019-05-07T17:39:58.314Z
Rationality: What's the point? 2019-02-03T16:34:33.457Z
STRUCTURE: Reality and rational best practice 2019-02-01T23:51:21.390Z
STRUCTURE: How the Social Affects your rationality 2019-02-01T23:35:43.511Z
STRUCTURE: A Crash Course in Your Brain 2019-02-01T23:17:23.872Z
Explore/Exploit for Conversations 2018-11-15T04:11:30.372Z
Starting Meditation 2018-10-24T15:09:06.019Z
Thoughts on tackling blindspots 2018-09-27T01:06:53.283Z
Can our universe contain a perfect simulation of itself? 2018-05-20T02:08:41.843Z
Reducing Agents: When abstractions break 2018-03-31T00:03:16.763Z

Comments

Comment by Hazard on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T17:29:53.214Z · LW · GW

I agree that disguising oneself as "someone who cares about X" doesn't require being good at X, at least when you only have short, contained contact with them.

I'm trying to emphasize that I don't think Cade has made any progress in learning to "say the right things". I think he has probably learned some individual words that are more frequent in a rationalist context than elsewhere (like the word "priors"), but it seems really unlikely that he's gotten any better at even the grammar of rationalist communication.

Like, I'd be mediumly surprised if he, when talking to a rat, said something like "so what's your priors on XYZ?" I'd be incredibly surprised if he said something like "there's clearly a large inferential distance between your world model and the public's world model, so maybe you could help point me towards what you think the cruxes might be for my next article?"

That last sentence seems like a v clear example of something that doesn't actually require understanding or caring about epistemology to utter, yet if I heard it I'd assume a certain orientation to epistemology, and someone could falsely get me to "let my guard down". I don't think Cade can do things like that. And based on Zack's convo and Vassar's convo with him, and the amount of time and exposure he's had to learn between the two convos, I don't think that's the sort of thing he's capable of.

Comment by Hazard on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T03:39:39.185Z · LW · GW

I might be misunderstanding, but I understood the comment I was responding to as saying that Zack was helping Cade do a better job of disguising himself as someone who cared about good epistemics. Something like "if Zack keeps talking, Cade will learn the surface-level features of a good convo about epistemology and thus, even if he still doesn't know shit, he'll be able to trick more people into thinking he's someone worth talking to."

In response to that claim, I shared an older interview of Cade to demonstrate that he's been exposed to people who talk about epistemology for a while, and he did not do a convincing job of pretending to be in good faith then, and in this interview with Zack I don't think he's doing any better a job of seeming like he's acting in good faith.

And while there can still be plenty of reasons to not talk to journalists, or Cade in particular, I really don't think "you'll enable them to mimic us better" is remotely plausible.

Comment by Hazard on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T01:14:27.260Z · LW · GW

I can visibly see you training him, via verbal conversation, how to outperform the vast majority of journalists at talking about epistemics.

Metz doesn't seem any better at seeming like he cares about or thinks at all about epistemics than he did in 2021.

https://naturalhazard.xyz/vassar_metz_interview.html

Comment by Hazard on Parasitic Language Games: maintaining ambiguity to hide conflict while burning the commons · 2023-03-15T04:37:08.594Z · LW · GW

Symbiotic would be a mutually beneficial relationship. What I described is very clearly not that.

Comment by Hazard on Parasitic Language Games: maintaining ambiguity to hide conflict while burning the commons · 2023-03-13T06:37:43.727Z · LW · GW

Yeah, the parasitic dynamic seems to set up the field for the scapegoating backup such that I'd expect to often find the scapegoating move in parasitic ecosystems that have been running their course for a while.

Comment by Hazard on Parasitic Language Games: maintaining ambiguity to hide conflict while burning the commons · 2023-03-13T06:28:37.015Z · LW · GW

Your comment seems like an expansion on who the party being fooled is, and it also points out another purpose for the obfuscation. A defense of pre-truth would be a theory that shows how it's not deceptive and not a way to cover up a conflict. That being said, I agree that an investor who plays pre-truth does want founders to lie, and it seems very plausible that they orient to their language game as a "figure it out" initiation ritual.

Comment by Hazard on Signalling & Simulacra · 2023-02-25T01:56:41.437Z · LW · GW

I'm with you on the deficiency of the signalling frame when talking about human communication and communication more generally. Skyrms and others who developed the signalling frame explicitly tried to avoid having a notion of intentionality in order to explore questions like "how could the simplest things that still make sense to call 'communication' develop in systems that don't have human level intelligence?", which means the model has a gaping hole when trying to talk about what people do.

I wrote a post about the interplay between the intentional aspects of meaning and what you're calling the probabilistic information. It doesn't get too into the weeds, but might provoke more ideas in you.

Comment by Hazard on Good books about overcoming coordination problems? · 2022-02-11T22:48:11.449Z · LW · GW

Not quite what you're looking for, but if you've got a default sense that coordination is hard, Jessica Taylor has an evocatively named post, Coordination isn't hard.

Comment by Hazard on Open & Welcome Thread November 2021 · 2021-11-13T16:08:16.821Z · LW · GW

I remember at some point finding a giant messy graph that was all of The Sequences and the links between posts. I can't track down the link, anyone remember this and have a lead?

Comment by Hazard on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T16:12:03.549Z · LW · GW

When I was drafting my comment, the original version of the text you first quoted was, "Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about 'HEY DON'T USE THIS TO SCAPEGOAT' (which people are totally capable of ignoring)"; guess I should have left that in there. I don't think it's uncommon to ignore such disclaimers; I do think it actively opposes behaviors and discourse norms I wish to see in the world.

I agree that putting a "I'm not trying to blame anyone" disclaimer can be a pragmatic rhetorical move for someone attempting to scapegoat. There's an alternate timeline version of Jessica that wrote this post as a well crafted, well defended rhetorical attack, where the literal statements in the post all clearly say "don't fucking scapegoat anyone, you fools" but all the associative and impressionistic "dark implications" (Vaniver's language) say "scapegoat CFAR/MIRI!" I want to draw your attention to the fact that for a potential dark implication to do anything, you need people who can pick up that signal. For it to be an effective rhetorical move, you need a critical mass of people who are well practiced in ignoring literal speech, who understand on some level that the details don't matter, and are listening in for "who should we blame?"

To be clear, I think there is such a critical mass! I think this is very unfortunate! (though not awkward, as Scott put it) There was a solid 2+ days where Scott and Vaniver's insistence on this being a game of "Scapegoat Vassar vs scapegoat CFAR/MIRI" totally sucked me in, and instead of reading the contents of anyone's comments I was just like "shit, whose side do I join? How bad would it be if people knew I hung out with Vassar once? I mean I really loved my time at CFAR, but I'm also friends with Ben and Jess. Fuck, but I also think Eli is a cool guy! Shit!" That mode of thinking I engaged in is a mode that can't really get me what I want, which is larger and larger groups of people that understand scapegoating dynamics and related phenomena.

This also seems too strong to me. I expect that many movement EAs will read Zoe's post and think "well, that's enough information for me to never have anything to do with Geoff or Leverage." This isn't because they're not interested in justice, it's because they don't have the time or the interest to investigate every allegation, so they're using some rough heuristics and policies such as "if something looks sufficiently like a dangerous cult, don't even bother giving it the benefit of the doubt."

Okay, I think my statement was vague enough to be mistaken for a statement I think is too strong. Though I expect you might consider my clarification too strong as well :)

I was thinking about the "in any way that matters" part. I can see how that implies a sort of disregard for justice that spans across time. Or more specifically, I can see how you would think it implies that certain conversations you've had with EA friends were impossible, or that they were lying/confabulating the whole convo, and you don't think that's true. I don't think that's the case either. I'm thinking about it as more piece-wise behavior. One will sincerely care about justice, but in that moment where they read Jess's post, ignore the giant disclaimer about scapegoating, and try to scapegoat MIRI/CFAR/Leverage, in that particular moment the cognitive processes generating their actions aren't aligned with justice, and are working against it. Almost like an "anti-justice traumatic flashback", but most of the time it's much more low-key and less intense than what you will read about in the literature on flashbacks. Malcolm Ocean does a great job of describing this sort of "falling into a dream" in his post Dream Mashups (his post is not about scapegoating, it's about ending up running a cognitive algo that hurts you without noticing).

To be clear, I'm not saying such behavior is contemptible, blameworthy, bad, or to-be-scapegoated. I am saying it's very damaging, and I want more people to understand how it works. I want to understand how it works more. I would love to not get sucked into as many anti-justice dreams where I actively work against creating the sort of world I want to live in.

So when I said "not aligned with justice in any important relevant way", that was more a statement about "how often and when will people fall into these dreams?" Sorta like the concept of a "fair weather friend", my current hunch is that people fall into scapegoating behavior exactly when it would be most helpful for them not to. And reading a post about "here's some problems I see in this institution that is at the core of our community" is exactly when it is most important for one's general atemporal commitment to justice to be present in one's actual thoughts and actions.

Comment by Hazard on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T01:42:59.344Z · LW · GW

This makes a lot of sense. I can notice ways in which I generally feel more threatened by social invalidation than by actual concrete threats of violence.

Comment by Hazard on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T21:06:33.468Z · LW · GW

I'm not sure what writing this comment felt like for you, but from my view it seems like you've noticed a lot of the dynamics about scapegoating and info-suppression fields that Ben and Jessica have blogged about in the past (and occasionally pointed out in the course of these comments, though less clearly). I'm going to highlight a few things.

I do think that Jessica writing this post will predictably have reputational externalities that I don't like and I think are unjustified. 

Broadly, I think that onlookers not paying much attention would have concluded from Zoe's post that Leverage is a cult that should be excluded from polite society, and hearing of both Zoe's and Jessica's post, is likely to conclude that Leverage and MIRI are similarly bad cults.

I totally agree with this. I also think that the degree to which an "onlooker not paying much attention" concludes this is the degree to which they are habituated to engaging with discussion of wrongdoing as scapegoating games. This seems to be very common (though incredibly damaging) behavior. Scapegoating works on the associative/impressionistic logic of "looks", and Jessica's post certainly makes CFAR/MIRI "look" bad. This post can be used as "material" or "fuel" for scapegoating, regardless of Jessica's intent in writing it. Though it can't be used honestly to scapegoat (if there even is such a thing). Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about "HEY, DON'T USE THIS TO SCAPEGOAT", and has no plausible claim to doing justice, upholding rules, or caring about the truth of the matter in any important relevant sense.

(aside, from both my priors on Jess and my reading of the post it was clear to me that Jess wasn't trying to scapegoat CFAR/MIRI. It also simply isn't in Jess's interests for them to be scapegoated)

Another thought: CFAR/MIRI already "look" crazy to most people who might check them out. UFAI, cryonics, acausal trade, are all things that "look" crazy. And yet we're all able to talk about them on LW without worrying about "how it looks", because many, many conversations, sequences, blog posts, comments, etc. have created a community with different common knowledge about what will result in people ganging up on you.

Something that we as a community don't talk a lot about is power structures, coercion, emotional abuse, manipulation, etc. We don't collectively build and share models on their mechanics and structure. As such, I think it's expected that when "things get real" people abandon commitment to the truth in favor of "oh shit, there's an actual conflict, I or others could be scapegoated, I am not safe, I need to protect my people from being scapegoated at all cost".

However, I think that we mostly shouldn't be in the business of trying to cater to bystanders who are not invested in understanding what is actually going on in detail, and we especially should not compromise the discourse of people who are invested in understanding.

I totally agree, and I think if you explore this sense you already sorta see how a commitment to making sure things "look okay" quickly becomes a commitment to suppress information about what happened.

(aside, these are some of Ben's posts that have been most useful to me for understanding some of this stuff)

Blame Games

Can Crimes Be Discussed Literally?

Judgement, Punishment, and Information-Suppression Fields

Comment by Hazard on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T03:18:20.523Z · LW · GW

I found many things you shared useful. I also expect that because of your style/tone you'll get downvoted :(

Comment by Hazard on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-20T13:28:41.688Z · LW · GW

Thinking about people I know who've met Vassar, the ones who weren't brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he's spooky or cultish; to them, he's obviously just a guy with an interesting perspective.

This is very interesting to me! I'd like to hear more about how the two groups' behavior looks different, and also your thoughts on what's the difference that makes the difference: what are the pieces of "being brought up to go to college" that lead to one class of reactions?

Comment by Hazard on Predictive Categories Make Bad Causal Variables · 2021-10-18T21:08:15.492Z · LW · GW

It's not clear to me what if anything we disagree on.

I agree that personality categories are useful for predicting someone's behavior across time.

I don't think using essences to make predictions is the "wrong thing to do in general" either.

I agree climate can be a useful predictive category for thinking about a region. 

My point about taking the wrong thing as a causal variable "leading you to overestimate your ability to make precise causal interventions" is actually very relevant to Duncan's recent post. Many thought experiments are misleading/bogus/don't-do-what-they-say-on-label exactly because they posit impossible interventions.

Comment by Hazard on Hazard's Shortform Feed · 2021-09-26T17:30:24.678Z · LW · GW

"People are over sensitive to ostracism because human brains are hardwired to be sensitive to it, because in the ancestral environment it meant death."

Evopsyche seems mostly overkill for explaining why a particular person is strongly attached to social reality. 

People who did not care what their parents or school-teachers thought of them had a very hard time. "Socialization" is the process of the people around you integrating you (often forcefully) into the local social reality. Unless you meet a minimum bar of socialization, it's very common to be shunted through systems that treat you worse and worse. Awareness of this, and the lasting imprint of coercive methods used to integrate one into social reality, seem like they can explain most of an individual's resistance to breaking from it.

Comment by Hazard on How feeling more secure feels different than I expected · 2021-09-17T12:06:03.496Z · LW · GW

I greatly appreciate posts that describe when different flavors of self work (or different kinds of problems) don't feel like how one expected. A somewhat reversed example for me, for some years I didn't notice the intense judgement I had within me that would occasionally point at others and myself, largely because I had a particular stereotype of what "being judgemental" looked like. I correctly determined I didn't do the stereotypically judgemental thing, and stopped hunting.

Comment by Hazard on When Arguing Definitions is Arguing Decisions · 2021-07-27T03:18:39.967Z · LW · GW

I agree that meeting a person where they are is pretty important. You also seem to spend time with very different people than who I spend time with, and you have a very different reference for "people" and "where they are". This post probably isn't going to be too useful to the people you reference in your hypotheticals. It has been very useful for various people I know, so I'm meeting them where they are :)

You mention that it's useful to have conversations where you try to get on the same page about what you mean when you use certain words (3rd to last paragraph of your comment). I think that's frequently super important and often useful to do. I'm assuming you're mentioning it because you see my post as saying this doesn't matter and shouldn't be done. If you can point out what part seemed to be arguing that, I can see if I agree that my wording was ambiguous and/or poorly phrased. Currently I still don't think the content of my post argues or implies or sets the philosophical underpinnings for the claims you say it does. So we probably won't get out of this unless we dive into specifics.

Comment by Hazard on When Arguing Definitions is Arguing Decisions · 2021-07-26T12:20:18.022Z · LW · GW

As a shortcut, if you have similar criticisms of A Human's Guide to Words, then we probably do disagree a lot. But if you don't think EY "thinks words aren't useful" then we just have a misunderstanding.

Comment by Hazard on When Arguing Definitions is Arguing Decisions · 2021-07-26T12:12:43.501Z · LW · GW

This is awkward because I'm pretty sure I don't believe anything your reply asserts I believe.

To clarify, is it the case that from reading my post you've concluded that I don't think labels/words are useful and that I don't think we need language for complex thought? If that's the case, can you help me understand how you got that?

Some thoughts: the "When" in the title was meant to make this distinct from simply "Arguing Definitions Is Arguing Decisions". Of all arguments about definitions, some unknown amount have the qualities I'm pointing at.

When you mention that I promptly forget that words/labels are useful, do you think I said things that contradicted the idea of words being useful, or did the fact that I didn't keep circling back to "words are great" make you infer I don't care about them? Mayhaps I find the idea of thinking people shouldn't use language as so ridiculous that I didn't feel a need to hedge against that interpretation, but you run into these sorts of people often and have high priors for that interpretation?

Comment by Hazard on When Arguing Definitions is Arguing Decisions · 2021-07-26T11:55:10.733Z · LW · GW

I'll file a complaint to this imaginary workplace.

I'm short on actual conversations I can remember the details of, so if you have any that you think make a good example, feel free to share. Examples are some of the most important parts and I don't like it whenever I have to make them up.

Comment by Hazard on Hazard's Shortform Feed · 2021-07-25T14:20:40.253Z · LW · GW

I'm reflecting back on this sequence I started two years ago. There's some good stuff in it. I recently made a comic strip that has more of my up to date thoughts on language here. Who knows, maybe I'll come back and synthesize things.

Comment by Hazard on How to Ignore Your Emotions (while also thinking you're awesome at emotions) · 2021-07-04T17:57:35.113Z · LW · GW

Agreed that there's something missing. I didn't provide much of a model about what emotions are, mostly because I didn't have much of one when I wrote this. It was also the case that for some time I used my lack of a mechanistic model of emotions as an excuse to ignore the ways I was obviously hurting.

In response to Raemon's comment here, I and a few others gave some more concrete thoughts on what negative repercussions are.

I intend to write some follow up posts with what I've learned in the intervening years. One thing I need to expand on is what I actually did with "fix it or stop complaining", because if I take your comment at face value, we were clearly not doing the same thing, yet we both felt it sensical to call what we did "fix it or stop complaining".

Another thought, these days I'm thinking a bit more in terms of "disavowed desires" instead of "repressed emotions". Desires (or subagents) feel like the mental things that generate loops across time, that make things come up again and again. Emotions are the transient expressions of these desires. Emotions actually can "just go away" if you ignore them, but I haven't found that to be the case for desires (I'm thinking less "I desire to have some lunch" and more "I desire to be accepted by others". Well, it's less "can I get this to go away rn?" (which you can almost always do with [drugs/video games/media/activity/etc]) and more "will this pop back up?").

This post of mine includes the exposition of one disavowed desire I've struggled with, which generated a lot of emotions over the years which I ignored. The header "A Serious Paradox" describes the disavowed desire. Knots by R.D. Laing describes in poetic language a lot of these emotional paradoxes.

All that being said, I've spent the last yearish more in a mode of understanding and building agency. This has felt possible because I feel I've unraveled enough emotional paradoxes that I'll know if/when I'm doing something that hurts me (agency isn't safe when you're not aligned). I've got a few threads about the process of building agency with an eye on not backsliding on emotional stuff, and another post which frames a lot of this journey.

Comment by Hazard on Looking Deeper at Deconfusion · 2021-06-15T16:45:22.952Z · LW · GW

Great post! Would also be interested in reading your distributed systems papers.

Comment by Hazard on Academia as Company Hierarchy · 2021-05-12T20:28:59.765Z · LW · GW

Rao made his framework by combining his consulting experience with the TV show The Office. I don't believe he was trying to describe all corporations, which leaves me with the question "How would I determine which workplaces have these dynamics?"

The world he describes doesn't seem incompatible with the corporate world that the book Moral Mazes depicts.

I've not been in the working world long enough to have any data on what's common or normal, and haven't been at my current workplace long enough to have a sense for if it matches Rao's frame (it doesn't seem like it does).

You also don't think your workplace fits the bill. Have you interacted with any workplaces that seemed to match up? How many workplaces have you interacted with enough to feel confident making the judgement either way? I'm very interested to get more data points.

Comment by Hazard on Academia as Company Hierarchy · 2021-05-12T20:22:30.018Z · LW · GW

From reading lots of Rao's stuff, I also got the sense that he's writing descriptively, and specifically, he's trying to describe The Office. It'll be truthful to the degree that The Office captures some truths, and to the degree that Rao's own consulting experience fills in the details.

Comment by Hazard on My Journey to the Dark Side · 2021-05-07T00:31:10.612Z · LW · GW

I appreciate you writing this! Describing how exactly a set of ideas fucked with you, how the ideas interlock, and what you think their structure is, is something I'm always glad to see.

Comment by Hazard on Wanting to Succeed on Every Metric Presented · 2021-04-15T02:47:06.073Z · LW · GW

Sometimes when I'm writing an email to someone at work, I notice I'm making various faces, as if to convey the emotion in the sentence I'm writing. It's like... I'm composing a sentence, I'm imagining what I'm trying to express, and I'm imagining that expression, and along with that comes the physical faces and mental stances of the thing I'm expressing. It's like I'm trying to fill in and inhabit some imagined state.

Over the past year I've noticed a similar sort of feeling when I'm thinking about something I could potentially do, and I'm being motivated by appearing impressive. The idea/thought is there, and then I try to "fill it up" and momentarily live into that world. There's normally a slight tension in my forehead that starts to form. There's also a sort of "zooming in" feeling in my head. It likely sounds drastic as I type it out, but this is all pretty subtle and I didn't notice it for a while.

Anywho, mostly if I find myself pleasurably stewing in the imagined state of the thing, it's a sign for me that it's about impressiveness. I seem to not sit in the idea when there's other motivations? I can't think of any reason why that would be the case, but it seems to be for me.

Comment by Hazard on Against "Context-Free Integrity" · 2021-04-15T02:29:28.860Z · LW · GW

Dope, it was nice to check and see that contrary to what I expect, it's not always being used that way :)

Some idle musings on using naive to convey specific content.

Sometimes I might want to communicate that I think someone's wrong, and I also think they're wrong in a way that's only likely to happen if they lack experience X. Or similar, they are wrong because they haven't had experience X. That's something I can imagine being relevant and something I'd want to communicate. Though I'd specifically want to mention the experience that I think they're lacking. Otherwise it feels like I'm asserting "there just is this thing that is being generally privy to how things work" and you can be privy or not, which feels like it would pull me away from looking at specific things and understanding how they work, and instead towards trying to "figure out the secret". (This is less relevant to your post, because you are actually talking about things one can do)

There's another thing which is in between what I just mentioned, and "naive" as a pure intentional put-down. It's something like "You are wrong, you are wrong because you haven't had experience X, and everyone who has had experience X is able to tell that you are wrong and haven't had experience X." The extra piece here is the assertion that "there are many people who know you are wrong". Maybe those many people are "us", maybe not. I'm having a much harder time thinking of an example where that's something that's useful to communicate, and it is too close to asserting group pressure for my liking.

Comment by Hazard on Against "Context-Free Integrity" · 2021-04-14T14:03:18.494Z · LW · GW

I generally agree with this post.

And man, that feels kinda naive to me.

Is there something you wanted to communicate here that was more than "that feels wrong/not true"? All usage and explications of "naive" that I've encountered seemed to focus on "the thing here that is bad or shameful is that we experienced people know this and you don't, get with the program".

Comment by Hazard on Wanting to Succeed on Every Metric Presented · 2021-04-13T02:27:38.782Z · LW · GW

I liked that you provided a lot of examples!

If the details are available within you, I'd love to hear more about what the experience of noticing these fake values was like. Say for getting A's, I'd hazard a guess that at some point pre-this-revelation you did something like "thinking about why A's matter". What was that like? What was different about that reflection from more recent reflection? Has it been mostly a matter of learning to pay attention and then it's all easy, or have you had to learn what different sorts of motivation/fake-real values feel like, or other?

Does it feel like there were any "pre-requisites" for being able to notice the difference?

Comment by Hazard on Hazard's Shortform Feed · 2021-03-07T23:42:06.559Z · LW · GW

Previously when I'd encountered the distinction between synthetic and analytic thought (as philosophers used them), I didn't quite get it. Yesterday I started reading Kant's Prolegomena and have a new appreciation for the idea. I used to imagine that "doing the analytic method" meant looking at definitions. 

I didn't imagine the idea actually being applied to concepts in one's head. I imagined the process being applied to a word. And it seemed clear to me that you're never going to gain much insight or wisdom from investigating a word's definition and going to a dictionary.

But the process of looking at some existing concept you have in your mind, that you already use and think with, and peeling it apart to see what you're actually doing, that's totally useful!

Comment by Hazard on 02/28/2021 - Myanmar Diaries; Context · 2021-02-28T21:01:17.900Z · LW · GW

I think this diary is a good idea, interested to see how it goes!

Comment by Hazard on How to Absorb a Shared Success Script (while also thinking you're living without one) · 2021-02-05T14:38:54.744Z · LW · GW

That certainly would have made a cool diary :)

I totally agree that the dude's critique didn't have much substance. That example, and several others, are all things where now I can see and feel the lack of substance. It was very real then though. In writing this I tried to emphasize that aspect, the way there wasn't much putting things in context, the way that my strategy for dealing with people made it very hard to go "k, jelly person critiquing with no substance".

Comment by Hazard on How to Absorb a Shared Success Script (while also thinking you're living without one) · 2021-02-02T19:51:01.594Z · LW · GW

Glad a part related! 

Yeah, the particular self-narrative one has probably does a lot of the shaping of everything else. The messages from others that I attend to would be a bit different from yours.

Comment by Hazard on Open & Welcome Thread - January 2021 · 2021-02-02T13:58:13.751Z · LW · GW

Nvm I found it! It was about types of philosophers, it was a comment, and it's this one by gjm.

Comment by Hazard on Open & Welcome Thread - January 2021 · 2021-02-02T13:56:28.382Z · LW · GW

I'm trying to find a post (maybe a comment?) from the past few years. The idea was, say you have 8 descriptive labels. These labels could correspond to clusters in thing-space. Or they could correspond to axes. I think it was about types of mathematicians.

Comment by Hazard on How to Absorb a Shared Success Script (while also thinking you're living without one) · 2021-02-01T12:47:11.861Z · LW · GW

I'm trying to remember the book's take, something like "humor is to suss out what norms you do and don't approve of"?

Comment by Hazard on crl826's Shortform · 2021-01-10T19:54:09.497Z · LW · GW

Rao offhandedly mentions that the Clueless are useful to put blame on when there's a "reorg". That didn't mean much to me until I read the first few chapters of Moral Mazes, where it went through several detailed examples of the politics of a reorg.

Comment by Hazard on How to Ignore Your Emotions (while also thinking you're awesome at emotions) · 2021-01-09T15:02:30.686Z · LW · GW

I'm the author, writing a review/reflection.

I wrote this post mainly to express myself and make more real my understanding of my own situation. The summer of 2019 I was doing a lot of exploration on how I felt and experienced the world, and I was also doing lots of detective work trying to understand "how I got to now."

The most valuable thing it adds is a detailed example of what it feels like to mishandle advice about emotions from the inside. This was prompted by the fact that younger me "already knew" about dealing with his emotions, and I wanted to write a post that plausibly would have helped him.

I think this sort of data is incredibly important. Understanding the actual details of your mind that prevented you from taking advantage of "good advice". I want more of people sharing "here's the particular way I got this wrong for a long time" more so than "something other people get wrong is blah". This feels like the difference between "What? I guess you weren't paying attention when you read the sequences" and "Ah, your mind is in a way where you will reliably get this one important aspect of the sequences wrong, let's explore this."

I still reference this post a lot, to friends and in my own thinking. It's no longer the focal point of any of my self work, but it's a foundational piece of self-knowledge.

"Does this post make accurate claims" is the fun part :) I tried my hardest to make this 100% "here's a thing that happened to me" because I'm an expert on my own history. But real quick I'll try to pull out the external claims and give them a spot check:

  • Everyone could learn to wiggle their ears
    • Not exactly a booming field of research, but this had the little research I could find. I think I'd put 80% or something on this being true.
  • Certain mental/emotional skills that you haven't practiced your whole life have the same "flailing around in the dark" aspect as learning to wiggle your ears
    • "Flailing around in the dark" is defs a possible human experience. Maybe a better example would be bling people seeing through sensors on their tongue. It takes time to learn how to use such a device.
    • I'd expect most people to agree with me that as a developing infant, learning to actuate your body and mind involved a lot of time "flailing around in the dark". Though I imagine one could also say "yeah, but after you grow up that's not a problem any more. There aren't parts of my body that I'm mysteriously unable to move but have the potential to." Wiggling ears was supposed to be an example of such a part, but I still want to address this. Why wouldn't you have learned how to actuate all the parts of your mind? My answer is longer and I'm going to punt it to another comment.
  • The parent child model, and parts-work in general
    • Kaj's amazing sequence is where you should look for exploring the literal truth of these sorts of models.
    • pjeby and kaj had a great comment discussion about when and where parts models help or get in the way of self-work. The central paradox of parts work is that even if you sensibly identify conflicting parts of yourself, it's still all you. It always has been. Mostly in accord with what pjeby says, I did in fact find the parent child model very useful specifically because the level of self-judgment I had made it really hard to not attack myself for having these wants and needs, but when I frame things as a group I can tap into all the intuitions I've built over the years about how of course you need to listen to people and not beat them into silence.
      • In summary, parts models can have the effect of putting distance between you and desires and needs that you have. It is possible that you are currently self-judgemental enough that you won't be able to make much progress unless you find a way to distance these desires, at least long enough for your judgement to shut up, and possibly allow you to figure out how to deal with the judgement.

Right, onto follow up.

In a comment, raemon said he'd appreciate an exploration of "what bad stuff actually happens if you ignore your emotions in this or a similar way?" There are 3 great responses sharing snippets of different people's experience. I think the most compelling extension I could add would be exploring more how "ignoring emotions" and "ignoring my ability to want" blend together, and how these processes combined to, for a long time, make it really hard for me to tell if something actually felt good, if I liked it or was interested in it, and as a corollary this made it easier for me to chase after substitutes (I can't tell if I like this, but it's impressive and everyone will reward me for it, but I also am not aware that I can't tell if I like it, so I now do this thing and think I like it, even though my motivation/energy for it will not survive outside the realm of social reinforcement). I'm currently writing a post that explores some of those dynamics! I could certainly add a paragraph or two to this post.

In some comment Lisa Feldman's work on emotions was mentioned. This also highlights how I don't really look at what emotions are in this post. I've since built a waaaay more detailed model of emotions, how to think about mind-body connection, how this relates to trauma, and how it all connects to clear thinking / not being miserable. Again, this would be a whole other post, possibly many.

Another follow up on how I relate to parts models. I think in parts way less often these days. Pretty sure this is a direct result of having defused a decent amount of judgement. But I can also see a lot of that judgement flare up again when I'm in social situations. So I'm generally able to, when by myself (which is often), feel safe accepting all of me, but I generally don't feel safe doing that around other people.

A few people have told me that they really wanted a section on "and here's what healthy emotional processing looks like", but I don't think I'm going to add one, because I can't. I think the most valuable stuff I can write is "here's a really detailed example of how it happened to me... that's all." And while I have grown better at processing and listening to emotions, I've yet to gain the distance to figure out which parts of what I've been doing were most essential for me, and what the overall arc/shape of my progress looks like. Plus, this would be a whole nother giant post, not an addition.

Comment by Hazard on How to Ignore Your Emotions (while also thinking you're awesome at emotions) · 2021-01-09T14:45:34.397Z · LW · GW

I'm pondering this again. I expect, though I have not double checked, that the studied cases of pressure to find repressed memories leading to fake memories are mostly ones that involve, well, another person pressuring you. How often does this happen if you sit alone in your room and try it? A skilled assistant would almost certainly be better than an unskilled assistant, though I don't know how it compares to DIY, if you add the complication of "can you tell if someone is skilled or not?"

Would be interested if anyone's got info about DIY investigations. 

Comment by Hazard on Eli's shortform feed · 2021-01-04T02:29:10.914Z · LW · GW

I plan to blog more about how I understand some of these trigger states and how they relate to trauma. I do think there's a decent amount of written work, not sure how "canonical", but I've read some great stuff from sources I'm surprised I haven't heard more hype about. The most useful stuff I've read so far is the first three chapters of this book. It has hugely sharpened my thinking.

I agree that a lot of trauma discourse on our chunk of twitter is more focused on the personal experience/transformation side, and doesn't lend itself well to bigger Theory of Change type scheming.

http://www.traumaandnonviolence.com/chapter1.html

Comment by Hazard on Hazard's Shortform Feed · 2020-12-16T15:31:40.685Z · LW · GW

The way I see "Politics is the Mind Killer" get used, it feels like the natural extension is "Trying to do anything that involves high stakes or involves interacting with the outside world or even just coordinating a lot of our own Is The Mind Killer".

From this angle, a commitment to prevent things from getting "too political" to "avoid everyone becoming angry idiots" is also a commitment to not having an impact.

I really like how jessica re-frames things in this comment. The whole comment is interesting, here's a snippet:

Basically, if the issue is adversarial/deceptive action (conscious or subconscious) rather than simple mistakes, then "politics is the mind-killer" is the wrong framing. Rather, "politics is a domain where people often try to kill each other's minds" is closer.

Which would further transform my new, no-longer-catchy phrase to "Trying to do anything that involves high stakes or involves interacting with the outside world or even just coordinating a lot of our own will result in people trying to kill each other's minds."

Which has very different repercussions from the original saying.

Comment by Hazard on What confusions do people have about simulacrum levels? · 2020-12-15T01:15:00.111Z · LW · GW

Your linked comment was very useful. To those who didn't click, here's a relevant snippet:

It seems like Simulacrum Levels were aiming to explore two related concepts:

  • How people's models/interactions diverge over time from an original concept (where that concept is gradually replaced by exaggerations, lies, and social games, which eventually bear little or no referent to the original)
  • How people relate to object level truth, as a whole, vs social reality

The first concept makes sense to call "simulacrum", and the second one I think ends up making more sense to classify in the 2x2 grid that I and Daniel Kokotajilo both suggested (and probably doesn't make sense to refer to as 'simulacrum')

Comment by Hazard on Hazard's Shortform Feed · 2020-12-14T23:53:36.209Z · LW · GW

I started writing on LW in 2017, 64 posts ago. I've changed a lot since then, and my writing's gotten a lot better, and writing is becoming closer and closer to something I do. Because of [long detailed personal reasons I'm gonna write about at some point] I don't feel at home here, but I have a lot of warm feelings towards LW being a place where I've done a lot of growing :)

Comment by Hazard on Cultural accumulation · 2020-12-06T16:10:55.954Z · LW · GW

This makes me wonder, for every experiment that's had a result of "X amount of people can't do Y task", how would that translate to "Z amount of people can/can't do Y task when we paid them to take 2 days/a week off of work and focus solely on it"?

Hard to test for obvious reasons.

Comment by Hazard on Cultural accumulation · 2020-12-06T16:07:31.702Z · LW · GW

The article cited is also wrong about the line counts for some of the other groups it mentions; Google doesn't have 2000 billion lines, according to their own metrics.

Comment by Hazard on Postmortem on my Comment Challenge · 2020-12-05T00:38:57.030Z · LW · GW

Love that you did this and learned something about some of the reasons discussions don't actually get started. I notice that I often don't comment in a discussion-conducive way because I don't enjoy trying to discuss with the time lag normally involved in LW comments. On twitter, I'm very quick to start convos, especially ones that are more speculative. That's partially because if we quickly hit a dead end (it was a bad question, I assumed something incorrect) it feels like no big deal. I'd be more frustrated having a garden path convo like that in LW comments.

Comment by Hazard on Building up to an Internal Family Systems model · 2020-12-04T22:37:00.313Z · LW · GW

Really what I want is for Kaj's entire sequence to be made into a book. Barring that, I'll settle for nominating this post. 

Comment by Hazard on Hazard's Shortform Feed · 2020-12-04T00:08:08.340Z · LW · GW

To everyone on the LW team, I'm so glad we do the year in review stuff! Looking over the table of contents for the 2018 book I'm like "damn, a whole list of bangers", and even looking at top karma for 2019 has a similar effect. Thanks for doing something that brings attention to previous good work.