Hazard's Shortform Feed

post by Hazard · 2018-02-04T14:50:42.647Z · LW · GW · 231 comments

In light of reading through Raemon's shortform feed, I'm making my own. Here will be smaller ideas that are on my mind.

Comments sorted by top scores.

comment by Hazard · 2020-08-14T14:24:38.940Z · LW(p) · GW(p)

HOLY shit! I just checked out the new concepts portion of the site that shows you all the tags. This feels like a HUGE step in the direction of the LW team's vision of a place where knowledge production can actually happen.

Replies from: Raemon
comment by Raemon · 2020-08-14T17:56:16.420Z · LW(p) · GW(p)

Woo! Glad that had the intended effect. :)

comment by Hazard · 2021-09-26T17:30:24.678Z · LW(p) · GW(p)

"People are over sensitive to ostracism because human brains are hardwired to be sensitive to it, because in the ancestral environment it meant death."

Evopsyche seems mostly overkill for explaining why a particular person is strongly attached to social reality. 

People who did not care what their parents or school-teachers thought of them had a very hard time. Think of "socialization" as the process of the people around you integrating you (often forcefully) into the local social reality. Unless you meet a minimum bar of socialization, it's very common to be shunted through systems that treat you worse and worse. Awareness of this, and the lasting imprint of coercive methods used to integrate one into social reality, seem like they can explain most of an individual's resistance to breaking from it.

comment by Hazard · 2019-07-30T20:22:41.216Z · LW(p) · GW(p)

I've recently re-read Lou Keep's Uruk series, and a lot more ideas have clicked together. I'm going to briefly summarize each post (hopefully this will tie things together if you have read them; it might not make sense if you haven't). This is also a mini-experiment in using comments to make a twitter-esque idea thread.

Replies from: Hazard, Hazard, Hazard, Hazard, None
comment by Hazard · 2019-07-30T21:07:07.159Z · LW(p) · GW(p)

#4 Without belief in a god, never without belief in the devil

This post tracks ideas in The True Believer, by Eric Hoffer.

There is a fundamental difference between the appeal of a mass movement and the appeal of a practical organization. The practical organization offers opportunities for self-advancement, and its appeal is mainly to self-interest. On the other hand, a mass movement, particularly in its active, revivalist phase, appeals not to those intent on bolstering and advancing a cherished self, but to those who crave to be rid of an unwanted self. A mass movement attracts and holds a following not because it can satisfy the desire for self-advancement, but because it can satisfy the passion for self-renunciation.

The main MO of a mass movement (MM) is to replace action with identity. This is the general phenomenon of which narcissism (TLP and samzdat brand) is a specific form.

Moloch-like forces conspire such that the most successful MMs will be the ones that do the best job of keeping their members very frustrated. Hate is often used to keep the fire burning.


comment by Hazard · 2019-07-30T20:56:11.502Z · LW(p) · GW(p)

#3 Use and Abuse of Witchdoctors

One sentence: Metis is the belief, the ritual, and the world view, and they are way less separable than you think.

Explores the recent history of gri-gri, witch doctor magic used in Africa to make people invulnerable to bullets to fight against local warlords (it can also involve some nasty sacrifice and cannibalism rituals). Lou emphasizes the point that it's not enough to go "ah, gri-gri is a useful lie that helps motivate everyone to fight as a unified force, and fighting as a unified force is what actually has a huge impact on fighting off warlords..."

The State's response is likely going to be "Ahhh, so gri-gri doesn't do anything, let's ban it and just tell people to fight in groups". This will fail, because it has no theory of individual adoption (i.e., the only reason people fought as one was because they literally thought they were invulnerable).

This is all to hammer in the point that for any given piece of illegible metis, it's very hard to find an actual working replacement, and very hard (possibly beyond the state's pay grade) to find a legible replacement.

comment by Hazard · 2019-07-30T20:46:43.811Z · LW(p) · GW(p)

#2 The Meridian of Her Greatness:

One sentence: People care about the social aspects of life, and the social is now embedded in market structures in a way that allows Moloch-esque forces to destroy the good social stuff.

It starts by addressing the "weirdness" of everyone being angry, even though people are richer than ever. This post tracks the book The Great Transformation by Polanyi.

Claim: (quote from Polanyi)

He [man] does not act so as to safeguard his individual interest in the possession of material goods; he acts so as to safeguard his social standing, his social claims, his social assets. He values material goods only insofar as they serve this end.

Capitalism is differentiated from markets. The reason is that markets have always been around (they were mediated and controlled through social relationships); the new/recent thing is building society around a market.

Claim: Once you treat labor and land like common market goods and subject them to the flows of the market, you open up a pathway for Moloch to gnaw away at your soul. Now "incentives" can apply pressure such that you slowly sacrifice more and more of the social/relational aspects of life that people actually care about.

comment by Hazard · 2019-07-30T20:34:33.044Z · LW(p) · GW(p)

#1 Man as a rational animal

The concept of legibility is introduced (I like Ribbon Farm's explanation of the concept). The state only talks in terms of legibility, and thus can't understand illegible claims, ideas, and practices. The powerless (i.e., the illegible, who can't speak in the terms of the state) tend to get crushed. (Nowadays an illegible group would be Christians.)

Lou points to the current process/trajectory of the state slowly legibilizing the world, and destroying all that is illegible in its path. Besides noting this process, Lou also claims that some of those illegible practices are valuable, and because the state does not truly understand the illegible practices it destroys, the state does not provide adequate replacements.

Extra claim: a lot of the illegible metis being destroyed has to do with happiness, fulfillment, and other essential components of human experience.


comment by [deleted] · 2019-08-13T20:38:19.270Z · LW(p) · GW(p)

I really like that you're doing this! I've tried to get into the series, but I haven't done so in a while. Thanks for the summaries!


(Also, maybe it'd be good for future comments about what you're doing to be children of this post, so it doesn't break the flow of summaries.)

comment by Hazard · 2018-03-01T15:13:12.621Z · LW(p) · GW(p)

Over the past few months I've noticed a very consistent cycle.

  1. Notice something fishy about my models
  2. Struggle and strain until I was able to formulate the extra variable/handle needed to develop the model
  3. Re-read an old post from the sequences and realize "Oh shit, Eliezer wrote a very lucid description of literally this exact same thing."

What's surprising is how much I'm surprised by how much this happens.

Replies from: Hazard, Hazard
comment by Hazard · 2019-02-22T19:56:26.056Z · LW(p) · GW(p)

Often I have an idea three times in various forms before it makes it to the territory of "well thought out idea that I'm actually acting upon and having good stuff come from it."

By default, I follow a pattern of "semi-randomly expose myself to lots of ideas, not worry a lot about screening for repetitive stuff, let the most salient ideas at any given moment float up to receive tid-bits of conscious thought, then forget about them till the next semi-random event triggers them being thought about."

I'd be interested if there was a better protocol for "this thing I've encountered seems extra important/interesting, let me dwell on it more and more intentionally integrate it into my thinking."

comment by Hazard · 2018-11-01T15:36:35.055Z · LW(p) · GW(p)

Ahh, the "meta-thoughts [LW(p) · GW(p)]" idea seems like a useful thing to apply if/when this happens again.

(Which begs the question: when I wrote the above comment, why didn't I have the meta-thought that I did in the linked comment? (I don't feel up to thinking about that in this moment.)) *tk*

comment by Hazard · 2018-06-15T01:47:56.508Z · LW(p) · GW(p)

Here's a pattern I'm noticing more and more: Gark makes a claim. Tlof doesn't have any particular contradictory beliefs, but takes up argument with Gark, because (and this is the actual-source-of-behavior because) the claim pattern matches, "Someone trying to lay claim to a tool to wield against me", and people often try to get claims "approved" to be used against the other.

Tlof's behavior is a useful adaptation to a combative conversational environment, and has been normalized to feel like a "simple disagreement". Even in high-trust scenarios, Tlof by habit continues to follow conversational behaviors that get in the way of good truth seeking.

Replies from: Hazard
comment by Hazard · 2018-06-20T23:25:05.061Z · LW(p) · GW(p)

A bit more generalized: there are various types of "gotcha!"s that people can pull in conversation, and it is possible to habituate various "gotcha!" defenses. These behaviors can detract from conversations where no one is pulling a "gotcha!".

comment by Hazard · 2019-10-23T23:01:58.724Z · LW(p) · GW(p)

Sketch of a post I'm writing:

"Keep your identity small" by Paul Graham $$\cong$$ "People get stupid/unreasonable about an issue when it becomes part of their identity. Don't put things into your identity."

"Do Something vs Be Someone" John Boyd distinction.

I'm going to think about this in terms of "What is one's main strategy to meet XYZ needs?" I claim that "this person got unreasonable because their identity was under attack" is more a situation of "this person is panicking at the possibility that their main strategy to meet XYZ need will fail."

Me growing up: I made an effort to not specifically "identify" with any group or ideal. Also, my main strategy for meeting social needs was "be so casually impressive that everyone wants to be my friend." I can't remember an instance of this, but I bet I would have looked like "my identity was under attack" if someone started saying something that undermined that strategy of mine. Being called boring probably would have been terrifying.

"Keep your identity small" is not actionable advice. The target should be more "Build multi-faceted confidence in yourself overtime, thus allowing you to never feel like one strategy failing is your doom."

Another way identity is a slightly unhelpful frame: If you claim identities are passive, inactive, "being" things, you are ignoring identities like, "I'm part of this sub culture that actually DOES stuff" or "I am decisive and get things done quickly". Some identities can involve more vacuous signalling than others.

Also, something about identity as a blueprint ("I'll try to be like this type of person, because they seem to succeed") that is very lossy and prone to Goodharting. Seems similar to the difference between asking "Is it rational to believe the sky is blue?" vs "Is the sky blue?"

Replies from: Hazard
comment by Hazard · 2019-12-04T14:20:36.281Z · LW(p) · GW(p)

Yesterday I read the first 5 articles on Google for "why arguments are useless". It seems pretty in the zeitgeist that "when people have their identity challenged you can't argue with them." A few of them stopped there and basically declared communication to be impossible if identity is involved; a few of them circuitously hinted at learning to listen and find common ground. A reason I want to get this post out is to add to the pile of "Here's why identity doesn't have to be a stop sign [LW · GW]."

comment by Hazard · 2019-08-13T20:03:29.056Z · LW(p) · GW(p)

Lol, one reason it's hard to talk to people about something I'm working through when there's a large inferential gap, is that when they misunderstand me and tell me what I think I sometimes believe them.

Replies from: Hazard
comment by Hazard · 2019-08-13T20:10:08.151Z · LW(p) · GW(p)

Example. Me: "I'm thinking about possible alternatives to typical ad revenue models of funding content creation and what it would take to switch, like what would it take to get eeeeeeveryone on Patreon? Maybe we could eliminate some of the winner-take-all popularity effects of selling eyeballs."

Friend: (somewhat indignantly) "You're missing the point. Why would you think this could solve the popularity contest? Patreon just shifts where that contest happens."

Me: (fumbles around trying to explain why I think Patreon is a good idea, even though I DON'T, and explicitly started the convo with "I'm exploring possibilities". Because my thoughts aren't yet super clear, I'm suddenly super into supporting something the other person thinks I think.)

Replies from: Dagon
comment by Dagon · 2019-08-13T21:43:55.810Z · LW(p) · GW(p)

This happens on LW as well, fairly often. It's hard to really introduce a topic in a way that people BELIEVE you when you say you're exploring concept space and looking for ideas related to this, rather than trying to evaluate this actual statement. It's still worth trying to get that across when you can.

It's also important to know your audience/discussion partners. For many people, it's entirely predictable that when you say "I'm thinking about ... get everyone on patreon" they will react to the idea of getting their representation of "everyone" on their ideas of "patreon". In fact, I don't know what else you could possibly get.

It may be better to try to frame your uncertainty about the problem, and explore that for a while, before you consider solutions, especially solutions to possibly-related-but-different problems. WHY are you thinking about funding and revenue? Do you need money? Do you want to give money to someone? Do you want some person C to create more content and you think person D will fund them? It's worth it to explore where Patreon succeeds and fails at whatever goals you have, but first you have to identify the goals.

Replies from: Hazard
comment by Hazard · 2019-08-14T01:24:17.512Z · LW(p) · GW(p)

Separating two different points in my example, there's "You misunderstanding my point leads me to misunderstand my point" (the thing I think is the most interesting part) and there's also "blarg! Stop misunderstanding me!"

I'm with you on your suggestion of framing a discussion as uncertainty about a problem, to get less of the misunderstanding.

comment by Hazard · 2019-08-04T20:35:19.195Z · LW(p) · GW(p)

I finished reading Crazy Rich Asians which I highly enjoyed. Some thoughts:

The characters in this story are crazy status-obsessed, and my guess is it's because status games were the only real games that had ever existed in their lives. Anything they ever wanted, they could just buy, but you can't pay other rich people to think you are impressive. So all of their energy goes into doing things that will make others think they're awesome/fashionable/wealthy/classy/etc. The degree to which the book plays this line is obscene.

Though you're never given exact numbers on Nick's family fortune, the book builds up an aura of impenetrable wealth. There is no way you will ever become as rich as the Youngs. I've long been a grumpy curmudgeon about showing off/signalling/buying positional goods, but a thing that this book made real to me was just how deep these games can go.

If you pick the most straightforward status marker (money), you've decided to try to climb a status ladder of impossible height with vicious competition. If you're going to pick a domain in which you care more about your ordinality than your cardinality, for the love of god choose carefully.

This reminds me of something an old fencing coach told me:

Fencing is a small enough sport that if you just train really diligently, you could make it to the Olympics. If you want to be the best in football, you have to train really diligently, be a genetic freak, and be lucky.

Whether or not that is/was true, it's an important thing to keep in mind. Also, I think I want to pay extra attention to "Do I actually think that XYZ is cardinally cool, or is it just the most impressive thing anyone is doing in my sphere of awareness?" The implication being that if it's the latter, expanding my sphere will lead to me not feeling good about doing XYZ.

comment by Hazard · 2019-07-22T20:57:02.832Z · LW(p) · GW(p)

Thoughts on writing (I've been spending 4 hours every morning for the last week working on Hazardous Guide to Words [? · GW]):

Feedback

Feedback is about figuring out stuff you didn't already know. I wrote the first draft of HGTW a month ago, and I wrote it in "short sentences that convince me personally that I have a coherent idea here". When I went to get feedback from some friends last week, I'd forgotten that I hadn't actually worked to make it understandable, and so most of the feedback was "this isn't understandable".

Writing with purpose

Almost always, if I get bogged down when writing it's because I'm trying to "do something justice" instead of "do what I want". "Where is the meaning [? · GW]?" started as "oh, I'll just paraphrase Hofstadter's view of meaning". The first example I thought of was to talk about how you can draw too much meaning from things, and look at claims of the pyramids predicting the future. I got bogged down writing those examples, because "what can lead you to think meaning is there when it's not?" was not really what I was talking about, nor was it what I needed to talk about language. It is interesting though.

I'm getting better at noticing the feeling of being part way through an explanation and going "oh shit, this is wrong/not the right frame/isn't congruent with the last chapter/doesn't build to where I want". There have been times in the past when I thought that feeling was just pesky perfectionism.

Having an explicit purpose for each post is crazy helpful for deciding what does and doesn't go in.

Process

I'm hap-hazardly growing more of a process with writing. I've currently got an outline of the refactored version of HGTW, with thought given to building concepts in the right order. Now I'm going down the outline and making the required posts.

I've started heading each post with one or two sentences, for myself, describing what the purpose of the post is. I then try to outline the post, and when I'm done or if I get stuck, I just start trying to write it out. This is "get it all out": don't even worry about connecting sentences; bail mid-paragraph and start again. Right now I'm going on gut for switching between outlining, organizing, and writing content. I'm getting much better at ditching stuff that I liked if I don't think it serves the purpose.

Oh, I'm also writing on work cycles (pomodoros with sprinkles). Breaks are stretching and staring out the window, great for not destroying my eyes and keeping my body from shriveling up and dying.

Musing on Ways I might better operationalize my writing

  • Stricter sense of audience
    • Or in the reverse framing, stricter sense of "this is my style and I'm sticking to it"
  • More intentionally entrain "purpose driven" writing?
    • Triggers: I'm getting bored. It feels hard to write. I haven't written anything in a minute. All my phrasings sound fake.
    • Action: "Aha! Friction, I noticed, thank you brain. Why was I trying to write that? Why does it feel weird? If this doesn't really matter, what does? Have I gotten to what matters yet?"
  • Can I productively work on writing in shorter chunks of time, or can I really only do stuff in 3 1/2 hour chunks?
    • Yeah, this seems pretty important given that I want to continue writing all through the next semester/year/life.
    • I think it might be more useful to have more concrete mental buckets for stages of writing.
      • When I'm doing 6 cycles in a day, I start each cycle like "Cool, time to [clarify the middle section]" as opposed to "write more". It might be the case that "working on that blog post" is too fuzzy a target to come to every day.
      • End each cycle by writing down the next step
    • Maybe a different mentality. In a given cycle, don't try to connect all the dots; just explain a few dots. After a few days of having made some dots, then I might be able to connect them in one day.
comment by Hazard · 2019-10-01T23:24:11.747Z · LW(p) · GW(p)

I'm torn on WaitButWhy's new series The Story of Us. My initial reaction was mostly negative. Most of that came from not liking the frame of Higher Mind and Primitive Mind, as that sort of thinking has been responsible for a lot of hiccups for me, making "doing what I want" an unnecessarily antagonistic process. And then along the way I see plenty of other ways I don't like how he slices up the world.

The torn part: maybe this is sorta the post "most people" need to start bridging the inferential gap [LW · GW] towards what I consider good epistemology? I expect most people on LW to find his series too simplistic, but I wonder if his posts would do more good than the Sequences for the average joe. As I'm writing this I'm acutely aware of how little I know about how "most people" think.

It also makes me think about how at some point in recent years I thought, "More dumbed down simplifications of crazy advanced math concepts should exist, to get more people a little bit closer to all the cool stuff there is." I guessed a mathematician might balk at this suggestion ("Don't tarnish my precious precision!") Am I reacting the same way?

I dunno, what do you think?

Replies from: romeostevensit, bgold, daniel-kokotajlo
comment by romeostevensit · 2019-10-06T22:39:55.807Z · LW(p) · GW(p)

Agree, seems like LW for normies circa ten plus years ago? Reaction for standard metacontrarian reasons, seeing past self in it.

comment by bgold · 2019-10-14T17:51:56.607Z · LW(p) · GW(p)

I'd like to see someone in this community write an extension / refinement of it to further {need-good-color-name}pill people into the LW memes that the "higher mind" is not fundamentally better than the "animal mind"

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-10-07T20:22:45.000Z · LW(p) · GW(p)

Yep, agreed. I want all my friends and family to read the series... and then have a conversation with me about the ways in which it oversimplifies and misleads, in particular the higher mind vs. primitive mind bit.

On balance though I think it's great that it exists and I predict it will be the gateway drug for a bunch of new rationalists in years to come.

comment by Hazard · 2019-08-18T18:52:22.864Z · LW(p) · GW(p)

Memex Thread:

I've taken copious notes in notebooks over the past 6 years, I've used evernote on and off as a capture tool for the past 4 years, and for the past 1.5 years I've been trying to organize my notes via a personal wiki. I'm in the process of switching and redesigning systems, so here's some thoughts.

Replies from: Hazard, Hazard, Hazard, Evan Rysdam
comment by Hazard · 2019-08-19T14:37:09.944Z · LW(p) · GW(p)

Concepts and Frames

Association, linking and graphs

A defining idea in this space is "your memory works by association; get your note taking to mirror that." A simple version of this is what you have in a wiki: every concept mentioned that has its own page gets a link to it. I'm a big fan of graph visualizations of information, and you could imagine looking at a graph of your personal wiki where edges are links. Roam embraces links with memory: all your notes know if they've been linked to, and display this information. My idea for a memex tool to make really interesting graphs is to basically give you free rein to make the type system of your nodes and edges, and give you really good filtering/search capacity on that type system. Basically a dope GUI/text editor on top of Neo4j.
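To make the "type system over nodes and edges" idea concrete, here's a minimal sketch in Python. Everything here is hypothetical (the Node/Edge/Memex names and the toy data are mine, not an existing tool); a real version would sit on an actual graph database like Neo4j rather than in-memory lists:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    type: str                  # user-defined: "concept", "book", "joke", ...
    props: dict = field(default_factory=dict)

@dataclass
class Edge:
    src: Node
    dst: Node
    type: str                  # user-defined: "explains", "contradicts", ...

class Memex:
    """Toy in-memory graph with filtering over a user-defined type system."""
    def __init__(self):
        self.nodes = []
        self.edges = []

    def link(self, src, dst, edge_type):
        self.edges.append(Edge(src, dst, edge_type))

    def neighbors(self, node, edge_type=None):
        # The "really good filtering/search capacity" part, reduced to its core.
        return [e.dst for e in self.edges
                if e.src is node and (edge_type is None or e.type == edge_type)]

m = Memex()
metis = Node("metis", "concept")
uruk = Node("Uruk series", "essay")
m.nodes += [metis, uruk]
m.link(uruk, metis, "explains")
print([n.name for n in m.neighbors(uruk, "explains")])  # ['metis']
```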

Personal Lit review

This is one way I frame, to myself, what I want. Sometimes I go, "Okay, I want to rethink how I orient to loose-tie friendships." Then I remember that I've definitely thought about this before, but can't remember what I thought. This is the situation where I'd want to do a "lit review" of how I've attacked this issue in the past, and move forward in light of my history.

Just-in-time ideation

I take a shit ton of notes. Some are notes on what I'm reading; others are random ideas for jokes, projects, theories, armchair philosophizing. Not all ideas should be, or can be, acted upon right away, or at all (like "turn Spain into a tortilla"). But there is some possible future situation where it would be useful to have a given idea brought to mind. My ideal memex would actually be a genie that remembers everything I've thought and written, follows me around, and constantly asks, "What would be useful for Hazard to remember right now?" You can act on this in how you design your notes. Think: "What sort of situation would it be useful to remember this in? In that situation, what key words and phrases will be in my head? Include those in this note so they'll pop up in a search for those keywords." (A toy sketch of this is below.)
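Here's that toy sketch: a minimal, purely illustrative take on keyword-seeding, where every name and note body is made up:

```python
import re

# Each note carries the keywords you expect to be in your head
# in the future situation where the note would be useful.
notes = [
    {"body": "Rethink loose-tie friendships: schedule low-stakes check-ins.",
     "keywords": {"friendship", "lonely", "reconnect"}},
    {"body": "Turn Spain into a tortilla.",
     "keywords": {"spain", "absurd", "jokes"}},
]

def recall(situation):
    """Surface any note whose seeded keywords overlap what's on your mind now."""
    words = set(re.findall(r"\w+", situation.lower()))
    return [n["body"] for n in notes if n["keywords"] & words]

print(recall("feeling lonely, want to reconnect with people"))
# ['Rethink loose-tie friendships: schedule low-stakes check-ins.']
```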

Low friction capture everything

If you get perfectionist with your notes, you lose. This frame imagines your mind as a firehose of gold, and you want to capture all of it and sort out what's good later. Record all ideas, no matter how crackpot. Carry a notebook, put your note-taking app on your homescreen, set up your Alexa to dictate notes, do whatever it takes. One principle that comes out of this frame is to be lax on hierarchy and organization. It should be as easy as possible to just capture an idea, with no regard for "where it goes". If I have to navigate a file tree and decide where a doc/note/brainstorm goes before I've even gotten it out, it might die. The extreme end is NO organization, all search. Tiago doesn't like that and suggests "no org on capture; opportunistically organize, summarize, and combine over time".

Put EVERYTHING in your memex

This is embraced by Andrew Louis. This is also embraced by Notion; they want to be the one app you put everything in. I don't necessarily want one application that can do it all (text, tables, video, blah blah blah), but I DO want one memex command center where the existence of all data and files is recorded, and where you can connect and interlink them. This is sorta like TagSpaces, which is literally a wrapper around your file system, letting you tag, navigate, and add metadata to files for organizational purposes. I would LOVE to have one "master file system memex", special features for text editing, and then specific applications in charge of any more specialized functionality.



comment by Hazard · 2019-08-19T14:07:11.616Z · LW(p) · GW(p)

People Talking about Memex stuff

Tiago Forte: Build a Second Brain (here's an introduction)

He's been on my radar for a year, and I've just started reading more of his stuff. Suspicion that he might be me from the future. He's all about the process and design of the info flow and doesn't sell a memex tool. Big ideas: find what you need when the time is right, new organic connections, your second brain should surprise you, progressive summarization.

Andrew Louis: I'm building a memex

This guy takes the memex as a way of life. A self-proclaimed digital packrat, he's got every chat log since high school saved, always has his GPS on and records his location, and basically pours all of his digital data into a massive personal database. He's been developing an app for himself (partially for others) to manage and interact with this. This goes waaaaaaaay beyond note taking. I'd binge more of his stuff if I wanted to get a sense for the emergent revelations that could come from intense memexing.

(check out his demo vid)

Conor: Roam

Conor has both a beta product and many ideas about how to organize ideas. Inspired by Zettelkasten (post about Zettelkasten; it was the name of the physical note-card system used by Niklas Luhmann). Check out his white paper for the philosophy.

comment by Hazard · 2019-08-19T13:46:05.045Z · LW(p) · GW(p)

Products I've interacted with

Nuclino

Very cool. Mixes wiki, trello board, and graph centric views. Has all the nice content embedding, slash commands, etc. DOESN'T WORK OFFLINE :( (would be great otherwise)

Style/Inspiration: Wiki meets trello + extra.

Roam

Conor has been developing this with the Zettelkasten system as his inspiration. The biggest feature (in my mind) is "deep linking". You can link other notes into your note and have them "expanded", and if you edit the deep-linked note in a parent note, it actually edits the linked note. Also, notes keep track of every place they're mentioned. This allows for powerful spiderwebby knowledge connection. I'm playing with the beta, still getting familiar, and don't yet have much to say except that deep linking is exactly the feature I've always wanted and couldn't find.
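Here's my rough guess at the mechanism behind deep linking (transclusion), sketched in Python. This is not Roam's actual implementation, just the simplest model that has the behavior described: blocks are stored once and referenced by id, so rendering a parent expands the child in place, and editing the child updates every parent that embeds it.

```python
import re

# Blocks are stored exactly once, keyed by id; "((id))" embeds another block.
blocks = {
    "b1": "Zettelkasten: one idea per note, densely linked.",
    "b2": "My notes philosophy: ((b1)) Plus low-friction capture.",
}

def render(block_id):
    """Expand each ((id)) reference by recursively rendering that block."""
    return re.sub(r"\(\((\w+)\)\)",
                  lambda m: render(m.group(1)),
                  blocks[block_id])

print(render("b2"))
blocks["b1"] = "Zettelkasten: atomic notes, densely linked."  # edit the child once...
print(render("b2"))                                           # ...every parent updates
```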

Zim Wiki

Desktop wiki that works on Linux. Nothing fancy: uses a simple markdown-esque syntax, and everything is text files. I used it for a year; now I'm moving away. One reason is I want richer outlining powers like folding, but I'm also broadly moving away from framing my notes as a "personal wiki", for reasons I'll mention in another post.

PB Wiki

Just a wiki software. When I first decided to use a wiki to organize my school notes, I used this. It's an online tool which is --, but works okay as a wiki.

Emacs Org Mode

(What I'm currently using.) Emacs is a magical extensible text editor, and Org mode is a specific package for that editor. Org mode has great outlining capabilities, and unlimited possibilities for how you can customize stuff (coding required). The current thing I'd really need for Org mode to fit my needs is to be able to search my notes and see previews of them (think Evernote search: you see the titles of notes and a preview of the content). I think deft can get me this; haven't installed it yet though. Long term, Emacs is appealing because it seems like I can craft my own workflow with precision. Will take work though. Not recommended if you want something that "just works".

Evernote

Have used it a lot over the years. Great for capture (it's on your phone and your desktop (but not Linux [:(])). I've got several years of notes in there. I rarely build ideas in Evernote though. This is a "works out of the box" app.


comment by Sunny from QAD (Evan Rysdam) · 2019-08-18T19:08:45.705Z · LW(p) · GW(p)

I really like the idea of a personal wiki. I've been thinking for a while about how I can track concepts that I like but that don't seem to be part of the zeitgeist. I might set up a personal wiki for it!

Replies from: eigen
comment by eigen · 2019-08-18T19:12:14.652Z · LW(p) · GW(p)

Yes! Thinking about it is a great idea.

Is there any particular open source software you use to set this up?

Replies from: William_Darwin, Evan Rysdam
comment by William_Darwin · 2019-08-19T00:49:57.778Z · LW(p) · GW(p)

I use GitBook.com; it functions very well as a personal wiki (can link to other pages, categorise, etc).

comment by Sunny from QAD (Evan Rysdam) · 2019-08-18T19:19:22.834Z · LW(p) · GW(p)

IIRC, there is some kind of template software you can use to set up a basic wiki, kind of like how WordPress is a template software for a basic blog. If you google around you'll probably find it, if it exists.

comment by Hazard · 2019-08-14T14:21:40.961Z · LW(p) · GW(p)

Noticing an internal dynamic.

As a kid I liked to build stuff (little catapults, modified nerf guns, slingshots, etc). I entered a lot of those projects with the mindset of "I'll make this toy and then I can play with it forever and never be bored again!" When I would make the thing and get bored with it, I would be surprised and mildly upset, then forget about it and move to another thing. Now I think that when I was imagining the glorious cool toy future, I was actually imagining having a bunch of friends to play with (I didn't live around many other kids).

When I got to middle school and high school and spent more time around other kids, I encountered the idea of "that person talks like they're cool, but they aren't." When I got into sub-cultures centering around a skill or activity (magic), I experienced the more concentrated form: "that person acts like they're good at magic, but couldn't do a show to save their life."

I got the message, "To fit in, you have to really be about the thing. No half assing it. No posing."

Why, historically, have I gotten so worried when my interests shift? I'm not yet at a point in my life where there are that many logistical constraints (I've switched majors three times in three years without a hitch). I think it's because in the back of my head I expect every possible group or social scene to say, "We only want you if you're all about doing XYZ all the time." And when I'm super excited about XYZ, it's fine. But when I feel like "Yeah, I need a break" I get nervous.

Yeah, there is a hard underlying problem of "How to not let your culture become meaningless", but I think my extra-problem is that I gravitated towards the groups that defined themselves by "We put in lots of time mastering this specific hard skill and applying it." Though I expect it to be the case that for the rest of my life I want to have thoughtful engaging discussion with intellectually honest people (a piece of what I want from less wrong), I feel less reason to be sure that I'll want to spend a large fraction of my time and life working on a specific skill/domain, like magic, or distributed systems.


Replies from: Viliam
comment by Viliam · 2019-08-14T22:25:07.502Z · LW(p) · GW(p)

Years ago, I wrote fiction, and dreamed about writing a novel (I was only able to write short stories). I assumed I liked writing per se. But I was hanging out regularly with a group of fiction fans... and when later a conflict happened between me and them, so that I stopped meeting them completely, I found out I had no desire left to write fiction anymore. So, seems like this was actually about impressing specific people.

I got the message, "To fit in, you have to really be about the thing. No half assing it. No posing."

I suspect this is only a part of the story. There are various ways to fit in a group. For example, if you are attractive or highly socially skilled, people will forgive you being mediocre at the thing. But if you are not, and you still want to get to the center of attention, then you have to achieve the extreme levels of the thing.

comment by Hazard · 2020-06-08T14:54:18.757Z · LW(p) · GW(p)

tldr;

In high-school I read pop cogSci books like "You Are Not So Smart" and "Subliminal: How the Subconscious Mind Rules Your Behavior". I learned that "contrary to popular belief", your memory doesn't perfectly capture events like a camera would, but it's changed and reconstructed every time you remember it! So even if you think you remember something, you could be wrong! Memory is constructed, not a faithful representation of what happened! AAAAAANARCHY!!!

Wait a second, a camera doesn't perfectly capture events. Or at least, they definitely didn't when this analogy was first made. Do you remember red-eye? Instead of philosophizing on the metaphysics of representation, I'm just gonna note that "X is a construct!" sorts of claims cash out in terms of "you can be wrong in ways that matter to me!"

There's something funny about loudly declaring "it's not impossible to be wrong!"

In high-school, "gender is a social construct!" was enough of a meme that it wasn't uncommon for something to be called a social construct to express that you thought it was dumb.

Me: "God, the cafeteria food sucks!"

Friend: "Cafeteria food is a social construct!"

Calling something a social construct either meant "I don't like it" or "you can't tell me what to do". That was my limited experience with the idea of social constructs. Something I didn't have experience with was the rich feminist literature describing exactly how gender is constructed, what its effects are, and how it's been used to shape and control people for ages.

That is way more interesting to me than just the claim "if your explanation involves gender, you're wrong". Similarly, these days the cogSci I'm reading is stuff like Predictive Processing theory, which posits that all of human perception is made through a creative construction process, and more importantly it gives a detailed description of the process that does this constructing.

For me, a claim that "X is a construct" or "X isn't a 100% faithful representation" can only be interesting if there's either an account of the forces that are trying to assert otherwise, or an account of how the construction works.

Put another way: "you can be wrong!" is what you shout at someone who is insisting they can't be, and who is trying to make things happen that you don't like. Some people need to have that shouted at them. I don't think I'm that person. If there's a convo about something being a construct, I want to jump right to the juicy parts and start exploring!

(note: I want to extra emphasize that it can be as useful to explore "who's insisting to me that X is infallible?" as it is to explore "how is this fallible?" I've been thinking about how your sense of what's happening in your head is constructed, noticed I want to go "GUYS! Consciousness IS A CONSTRUCT!" and when I sat down to ask "Wait, who was trying to insist that it 100% isn't and that it's an infallible access into your own mind?" I got some very interesting results.)

Replies from: rudi-c
comment by Rudi C (rudi-c) · 2020-06-13T12:11:31.035Z · LW(p) · GW(p)

I think you’re falling for the curse of knowledge. Most people are so naive that they do think their, e.g., vision is a “direct experience” of reality. The more simplistic books are needed to bridge the inferential gap.

Replies from: Hazard
comment by Hazard · 2020-06-14T16:06:15.199Z · LW(p) · GW(p)

I'm ignoring that gap unless I find out that a bulk of the people reading my stuff think that way. I'm more writing to what feels like the edge of interesting and relevant to me.

comment by Hazard · 2019-10-23T23:26:39.624Z · LW(p) · GW(p)

Over this past year I've been thinking more in terms of "Much of my behavior exists because it was made as a mechanism to meet a need at some point."

Ideas that flow out of this frame seem to be things like Internal Family Systems, and "if I want to change behavior, I have to actually make sure that need is getting met."

Question: does anyone know of a source for this frame? Or at least writings that may have pioneered it?

Replies from: romeostevensit, mr-hire
comment by romeostevensit · 2019-10-29T00:51:08.151Z · LW(p) · GW(p)

Psycho-cybernetics is an early text in this realm.

comment by Matt Goldenberg (mr-hire) · 2019-10-25T19:27:03.979Z · LW(p) · GW(p)

I think this has developed gradually. The idea of "behavior is based on unconscious desires" goes back as far as at least Freud, probably earlier.

Replies from: Hazard
comment by Hazard · 2019-10-26T20:52:40.013Z · LW(p) · GW(p)

Yeah. To home in more specifically, I'm looking at "all of your needs are legit". I've heard for a while "you have all these unconscious desires you're optimizing for", often followed with "if only we could find a way to get rid of these desires." The new thing for me has been the idea that behind each of those "petty"/"base" desires there is a real, valid need that is okay to have.

Replies from: George3d6
comment by George3d6 · 2019-10-27T08:07:23.425Z · LW(p) · GW(p)

That seems like a potentially very unhealthy thing when applied to "basic" desires such as food and sex... unless yoloing your way through a life of hookers, coke (the sugary kind), and Jell-O seems appealing.

Our first-order desires usually conflict with our long-term desires, and the latter are usually much better to aim for.

But maybe I'm getting something wrong here. Where did you get this idea from?

Replies from: Hazard
comment by Hazard · 2019-10-27T14:05:20.762Z · LW(p) · GW(p)

The sentence "All your needs are legitimate" is pretty under-specified, so I'll try to flesh out the picture.

This gets a bit closer: "All your needs are legitimate, but not all of your strategies to meet those needs are legitimate." I can think there's nothing wrong with wanting sex while there are still plenty of ways to meet that need which I'd find abhorrent. "All your needs are legit" is not me claiming that any action you think to take is morally okay as long as it's an attempt to meet a need/desire. Another phrasing might be that I see a difference between "I have a need for sporadic pleasurable experiences, and for consuming food so I don't die" and "right now I want to go get a burger and a milkshake".

Another thing that shapes my frame is the claim that a lot of our behavior, even behavior that looks like it's just pursuing "basic" things, sources from needs/desires like "needing to feel loved", "needing to feel like you aren't useless", etc. This extends to the tentative claim: "If more people had most of their emotional needs met, lots of people would be far less inclined to engage in stereotypical hedonistic debauchery."

Now to your "Where did this idea come from?": I don't remember when I first explicitly encountered it, but the most formative interaction might have been at CFAR a year ago. You mentioned "Our first-order desires usually conflict with our long-term desires, and the latter are usually much better to aim for." I was investigating a lot of my "long term desires" and other top-down frameworks I had for valuing parts of my life, and began to see how they had been carefully crafted to meet certain "basic" desires, like not being in situations where people would yell at me and never having to beg for attention. Many of my long-term desires were actually strategies to meet various basic emotional needs, and they were also strategies that were causing conflicts with other parts of my life. My prior tendency was to go, "I'll just rebuke and disavow this strategy/desire (I didn't see the difference) and not make the mistake I was making."

The actionable and useful thing that "all your needs are legitimate" gave me: previously, if I found a behavior was causing some problems, and I determined I was likely engaging in this behavior so that people would like me, I'd decide "ha, needing to be liked is base and weak. I'll just axe this behavior." This would often lead to either mysteriously unsuccessful behavior change, or more internal anguish. Now I go, "It is completely okay and legit to want to be liked. I do in fact want that. Is there some way I can meet that need, but not incur the negatives that this behavior was producing?"


Replies from: George3d6
comment by George3d6 · 2019-10-27T15:31:26.720Z · LW(p) · GW(p)
All your needs are legitimate, but not all of your strategies to meet those needs are legitimate

Even in this form I don't believe this sentence holds.

For example, I am a smoker (well, vaper, but you get the point: nicotine user). I can guarantee you I have a very real need for:

a) Nicotine's effect on the brain

b) The throat hit nicotine gives

c) The physical "action" of smoking

Are those needs legitimate in the sense you seem to understand them? Yes, they are pretty legitimate, or at least I'd put them on the same level as other needs that most people would consider legitimate (e.g. the need to take a piss, the need to talk with a friend, w/e).

Must those needs stay legitimate? No. Actually, having taken breaks of up to half a year from the practice, I can tell those needs get less relevant the longer you go without smoking.

Should those needs stay legitimate? Well, I'd currently argue "yes", since otherwise I wouldn't be vaping as I'm writing this. But I'd equally argue that from a societal perspective the answer is "no"; indeed, for parts of my brain (the ones that don't want to smoke), the answer is "no".

1. Now, either smoking is a legitimate need

OR

2. Some needs that "seem" legitimate should actually be suppressed

OR

3. Needs not only need to "feel/seem" legitimate, they also need to have some other stamp of approval, such as being natural

1 - is a bad perspective to hold, all things considered; you wouldn't teach the kid you caught smoking that he should keep doing it because it's a legitimate need now that he kinda likes it.

2 - seems to counteract your point, because we can now claim any legitimate need should actually be suppressed rather than indulged in some way.

3 - You get into a nurture vs nature debate... in which case, I'm on the "you can't really tell" side for now and wouldn't personally go any further in that direction.

Replies from: Hazard
comment by Hazard · 2019-10-27T19:18:21.973Z · LW(p) · GW(p)

Okay, I agree that for "all your needs are legitimate..." the "all" part doesn't really seem to hold. Your example straightforwardly addresses that. Stuff that's closer to "biological stuff we have a decent understanding of" (drugs, food) doesn't really fit the claim I was making.

I think you also helped me figure out a better way to express my sentiment. I was about to rephrase it as "All of your emotional needs are legit" but that feels like it's a me going down the wrong path. I'll try to explain why I wanted to phrase it that way in the first place.

I see the "standard view" as something like "of course your emotions are important, but there are a few unsavory feelings that just aren't acceptable and you shouldn't have them." I think I reached too quickly for "there is no such thing as unacceptable feelings" rather than "here is why this specific feeling you are calling unacceptable actually is acceptable." I probably reached for that because it was easier.

Claim 1: The reasoning that proclaims a given emotional/social need is not legitimate is normally flawed.

(I could speak more to that, but it's sort of what I was mentioning at the end of my last comment)

I think this thing you mentioned is relevant.

Must those needs stay legitimate? No. Actually, having taken breaks of up to half a year from the practice, I can tell those needs get less relevant the longer you go without smoking.

I totally agree that something like smoking can have this "re-normalization" mechanism. Now I wonder what happens if we swap out the need for smoking with the need to feel like someone cares about you?

Claim 2: Ignored emotional/social needs will not "re-normalize" and will be a recurring source of pain, suffering, and problems.

The second claim seems like it could lead to a very tricky debate. High-school me would have insisted that I could totally ignore my desire to be liked by people without ill consequences, because look at me, I'm doing it right now and everything's fine! I can currently see how this was causing me serious problems [LW · GW]. So... if someone said to me that they can totally ignore things that I'd call emotional/social needs with no ill effects, I don't know how I'd separate it being true from it being the same as what I was going through.

Replies from: George3d6
comment by George3d6 · 2019-10-27T22:29:21.495Z · LW(p) · GW(p)
Claim 1: The reasoning that proclaims a given emotional/social need is not legitimate is normally flawed.
Claim 2: Ignored emotional/social needs will not "re-normalize" and will be a recurring source of pain, suffering, and problems.

I can pretty much agree with these claims.

I think it's worth breaking down emotional/social needs into lower-level entities than people usually do, e.g:

  • "I need to be in a sexual relationship with {X} even though they hate me" -- is an emotional need that's probably flawed
  • "I need to be in a sexual relationship" -- is an emotional need that's probably correct

***

  • "I need to be friends with {Y} even though they told me they don't enjoy my company" -- again, probably flawed
  • "I need to be friends with some of the people that I like" -- most likely correct

But then you reach the problem of where exactly you should stop the breakdown; as in, if your need is "too" generic, once you reach its core it might be rather hard to act upon. If you don't break them down at all, you end up acting like a sitcom character without the laugh track, wit, and happy coincidences.

Also, whilst I disagree with your initial formulation:

All your needs are legitimate

I don't particularly see anything against:

There is no such thing as unacceptable feelings

But it seems from your reply that you hold them to be one and the same?

Replies from: Hazard
comment by Hazard · 2019-10-28T23:22:38.716Z · LW(p) · GW(p)

In both of those examples I agree with your judgment of the needs.

If you switch "all your needs are legit" to "all your social/emotional needs are legit", then yeah, I was thinking of that and "there is no such thing as unacceptable feelings" as the same thing. Though I can now see two distinct ideas that they could point to.

"All your S/E needs are legit" seems to say not only that it's okay to have the need, it's okay to do something to meet it. That's a bit harder to handle than just "It's okay to feel something." And yeah, there probably is some scenario where you could have a need that there's no way you could ethically meet, and that you can't breakdown into a need that can be met.

Another thing I noticed informed my initial phrasing: I think there is a strong sour-grapes pressure to go from "I have this need, and I don't see any way to get it met that I'm okay with" to "well then this is a silly need and I don't even really care about it."

You've sparked many more thoughts from me on this, and I think those will come in a post sometime later. Thanks for prodding!


comment by Hazard · 2019-03-23T22:52:08.737Z · LW(p) · GW(p)

The general does not exist, there are only specifics.

If I have a thought in my head, "Texans like their guns", that thought got there from a finite number of specific interactions. Maybe I heard a joke about Texans. Maybe my family is from Texas. Maybe I hear a lot about it on the news.

"People don't like it when you cut them off mid sentence". Which people?

At a local meetup we do a thing called encounter groups, and one rule of encounter groups is "there is no 'the group', just individual people". Having conversations in that mode has been incredibly helpful to realize that, in fact, there is no "the group".

Replies from: clone of saturn
comment by clone of saturn · 2019-03-24T00:27:08.227Z · LW(p) · GW(p)

But why stop at individual people? This kind of ontological deflationism can naturally be continued to say there are no individual people, just cells, and no cells, just molecules, and no molecules, just atoms, and so on. You might object that it's absurd to say that people don't exist, but then why isn't it also absurd to say that groups don't exist?

Replies from: Hazard
comment by Hazard · 2019-04-01T01:27:35.292Z · LW(p) · GW(p)

The idea was less "individual humans are ontologically basic" and more: I've noticed that talking about broad groups of people has often been less useful than dropping down to talk about interactions I've had with individual people.

In writing the comment I was focusing more on what the action I wanted to take was (think about specific encounters with people when evaluating my impressions) and less on my ontological claims of what exists. I see how my lax opening sentence doesn't make that clear :)

comment by Hazard · 2019-03-21T02:15:05.297Z · LW(p) · GW(p)

What are the barriers to having really high "knowledge work output"?

I'm not capable of "being productive on arbitrary tasks". One winter break I made a plan to apply for all the small $100 essay scholarships people were always telling me no one applied for. After two days of sheer misery, I had to admit to myself that I wasn't able to be productive on a task that involved making up bullshit opinions about topics I didn't care about.

Conviction is important. From experiments with TAPs and a recent bout of meditation, it seems like when I bail on an intention, on some level I am no longer convinced the intention is a good idea/what I actually want to do. Strong conviction feels like confidence all the way up in the fact that this task/project is the right thing to spend your time on.

There's probably a lot in the vein of "have good chemistry": sleep well, eat well, get exercise.

One of the more mysterious quantities seems to be "cognitive effort". Sometimes thinking hard feels like it hurts my brain. This [LW · GW] post has a lot of advice in that regard.

I've previously hypothesized that a huge chunk of painful brain fog is the experience of thinking at a problem, but not actually engaging with it (similar to how Mark Forster has posited that the resistance one feels to a given task is proportional to how many times it has been rejected).

Having the rest of your life together and time-boxing your work is insanely important for reducing the frequency with which your brain promotes "unrelated" thoughts to your consciousness (if there's important stuff that isn't getting done, and you haven't convinced yourself that it will be handled adequately, your mind's tendency is to keep it in a loop).

I've got a feeling that there's a large amount of gains in the 5-second [LW · GW] level [LW · GW]. I would be super interested in seeing anyone's thoughts or writings on the 5-second level of doing better work and avoiding cognitive fatigue.

Replies from: Hazard
comment by Hazard · 2019-03-23T22:47:03.207Z · LW(p) · GW(p)

(Less a reply and more just related)

I often think a sentence like, "I want to have a really big brain!". What would that actually look like?

  • Not experiencing fear or worry when encountering new math.
  • Really quick to determine what I'm most curious about.
  • Not having my head hurt when I'm thinking hard, and generally not feeling much "cognitive strain".
  • Be able to fill in the vague and general impressions with the concrete examples that originally created them.
  • Doing a hammers and nails scan when I encounter new ideas.
  • Having a clear, quickly accessible understanding of the "proof chains" of ideas, as well as the "motivation chains".
    • I don't need to know all the proofs or motivations, but I do have a clear sense of what I understand myself, and what I've outsourced.
  • Instead of feeling "generally confused" by things or just "not getting them", I always have a concrete "this doesn't make sense because BLANK" expression that allows me to move forward.
comment by Hazard · 2019-01-27T02:36:37.610Z · LW(p) · GW(p)

Concrete example: when I'm full, I'm generally unable to imagine meals in the future as being pleasurable, even if I imagine eating a food I know I like. I can still predict and expect that I'll enjoy having a burger for dinner tomorrow, but if I just stuffed myself on french fries, I just can't run a simulation of tomorrow where the "enjoying the food experience" sense is triggered.

I take this as evidence that my internal food-experience simulator has "code" that just asks "if you ate XYZ right now, how would it feel?" and spits back the result.

This makes me wonder how many other mental systems I have that I think of as "Trying to imagine how I'd feel in the future" are really just predicting how I'd feel right now.

More specifically, the fact that I literally can't do a non-what-I'm-feeling-right-now food simulation makes me expect that I'm currently incapable of predicting future feelings in certain domains.

comment by Hazard · 2018-03-02T19:31:40.354Z · LW(p) · GW(p)

I'm in the process of turning this thought into a full essay.

Ideas that are getting mixed together:

Cached Thoughts, Original Seeing, Adaptation Executers not Fitness Maximizers, Motte and Bailey, Doublethink, Social Improv Web.

  • A mind can perform original seeing (to various degrees), and it can also use cached thoughts.
    • Cached thoughts are more “Procedural instruction manuals” and original seeing is more “Your true anticipations of reality”.
  • Both reality and social reality (social improv web) apply pressures and rewards that shape your cached thoughts.
  • It often looks like people can be said to have motives/agendas/goals, because their cached thoughts have been formed by the pressures of the social improv web.
    • Ex. Tom has a cached thought, the execution of which results in “People Like Tom”, which makes it look reasonable to assert “Tom’s motives are for people to like him”.
  • People are Cached-thought-executors, not Utility-maximizers/agenda-pursuers.
  • One can switch from acting from cached thoughts, to acting from original seeing without ever realizing a switch happened.
    • Motte and bailey doesn’t have to be intentional.
  • When talking with someone and applying pressure to their beliefs, it's no longer effective to chase down their "motives"/cached thoughts, because they've switched to a weak form of original seeing, and in that moment effectively no longer have the "motives" they had a few moments ago.
  • Tentatively dubbing this the Schrodinger’s Agenda.

Replies from: Raemon, Hazard
comment by Raemon · 2018-03-04T15:20:22.962Z · LW(p) · GW(p)

Just wanted to say I liked the core insight here (that people seem more-like-hidden-agenda executors when they're running on cached thoughts). I think it probably makes more sense to frame it as a hypothesis than a "this is a true thing about how social reality and motivation work", but a pretty good hypothesis. I'd be interested in the essay exploring what evidence might falsify or reinforce it.

(This is something that's not currently a major pattern among rationalist thinkpieces on psychology but probably should be)

Replies from: Hazard
comment by Hazard · 2018-03-04T20:31:53.777Z · LW(p) · GW(p)

hmmmmm, ironically my immediate thought was, "Well of course I was considering it as a hypothesis which I'm examining the evidence for", though I'd bet that the map/territory separation was not nearly as emphasized in my mind when I was generating this idea.

Yeah, I think your framing is how I'll take the essay.

comment by Hazard · 2018-03-21T20:02:12.482Z · LW(p) · GW(p)

Here's a more refined way of pointing out the problem that the parent comment was addressing:

  • I am a general intelligence that emerged running on hardware that wasn't intelligently designed for general intelligence.
  • Because of the sorts of problems I'm able to solve when directly applying my general intelligence (and because I don't understand intelligence that well), it is easy to end up implicitly believing that my hardware is far more intelligent than it actually is.
  • Examples of ways my hardware is "sub-par":
    • It doesn't seem to get automatic belief propagation.
    • There doesn't seem to be strong reasons to expect that all of my subsystems are guaranteed to be aligned with the motives that I have on a high level.
  • Because there are lots of little things that I implicitly believe my hardware does, which it does not, there are a lot of corrective measures I do not take to solve the deficiencies I actually have.
  • It's completely possible that my hardware works in such a way that I'm effectively working on different sets of beliefs and motives at various points in time, and I have a bias towards dismissing that because, "Well that would be stupid, and I am intelligent."

Another perspective. I'm thinking about all of the examples from the sequences of people near Eliezer thinking that AIs would just do certain things automatically. It seems like that lens is also how we look at ourselves.

Or it could be humans are not automatically strategic, but on steroids. Humans do not automatically get great hardware.

comment by Hazard · 2020-12-14T23:53:36.209Z · LW(p) · GW(p)

I started writing on LW in 2017, 64 posts ago. I've changed a lot since then, and my writing's gotten a lot better, and writing is becoming closer and closer to something I do. Because of [long detailed personal reasons I'm gonna write about at some point] I don't feel at home here, but I have a lot of warm feelings towards LW being a place where I've done a lot of growing :)

Replies from: Benito
comment by Ben Pace (Benito) · 2020-12-14T23:55:03.752Z · LW(p) · GW(p)

I'm glad about your growth here :)

comment by Hazard · 2019-05-06T21:55:38.401Z · LW(p) · GW(p)

A forming thought on post-rationality. I've been reading more samzdat lately and thinking about legibility and illegibility. Me paraphrasing one point from this post:

State driven rational planning (episteme) destroys local knowledge (metis), often resulting in metrics getting better, yet life getting worse, and it's impossible to complain about this in a language the state understands.

The quip that most readily comes to mind is "well if rationality is about winning [LW · GW], it sounds like the state isn't being very rational, and this isn't a fair attack on rationality itself" (this [LW(p) · GW(p)] comment quotes a similar argument).

Similarly, I was having a conversation with two friends once. Person A expressed that they were worried that if they started hanging around more EAs and rationalists, they might end up having a super boring optimized life and never do fun things like cook meals with friends (because soylent) or go dancing. Friend B expressed, "I dunno, that sounds pretty optimal to me."

I don't think friend A was legitimately worried about the general concept of optimization. I do think they were worried about what they expected their implementation (or their friends' implementation) of "optimality" in their own lives to look like.

Current most charitable claim I have of the post-rationalist mindset: the best and most technical specifications that we have for what things like optimal/truth/rational might look like contain very little information about what to actually do. In your pursuit of "truth"/"rationality"/"the optimal" as it pertains to your life, you will be making up most of your art along the way, not deriving it from first principles. Furthermore, thinking in terms of the truth/rationality/optimality will [somehow] lead you to make important errors you wouldn't have made otherwise.

A more blasé version of what I think the post-rationalist mindset is: you can't handle the (concept of the) truth.

Replies from: Hazard
comment by Hazard · 2019-07-31T14:37:07.423Z · LW(p) · GW(p)

Epistemic status: Some babble, help me prune.

My thoughts on the basic divide between rationalists and post-rationalists, lawful thinkers and toolbox thinkers [LW · GW].

Rat thinks: "I'm on board with The Great Reductionist Project [LW · GW], and everything can in theory be formalized."

Post-Rat hears: "I personally am going to reduce love/justice/mercy and the reduction is going to be perfect and work great."

Post-Rat thinks: "You aren't going to succeed in time / in a manner that will be useful for doing anything that matters in your life."

Rat hears: "It's fundamentally impossible to reduce love/justice/mercy and no formalism of anything will do any good."

Newcomb's Problem

Another way I see the difference is that the post-rats look at Newcomb's problem and say "Those causal rationalist losers! Just one-box! I don't care what your decision theory says, tell your self whatever story you need in order to just one-box!" The post-rats rally against people who are doing things like two-boxing because "it's optimal".

The most indignant rationalists are the ones who took the effort to create whole new formal decision theories that can one-box, and don't like that the post-rats think they'd be foolish enough to two-box just because a decision theory recommends it. While I think this gets the basic idea across, this example is also cheating. Rats can point to formalisms that do one-box, and in LW circles there even seem to be people who have worked the rationality of one-boxing deep into their minds.

Hypothesis: All the best rationalists are post-rationalists; they also happen to care enough about AI Safety that they continue to work diligently on formalism.

Replies from: habryka4
comment by habryka (habryka4) · 2019-07-31T15:13:46.911Z · LW(p) · GW(p)

Alternative hypothesis: Post-rationality was started by David Chapman being angry at historical rationalism. Rationality was started by Eliezer being angry at what he calls "old-school rationality". Both talk a lot about how people misuse frames, pretend that rigorous definitions of concepts are a thing, and broadly don't have good models of actual cognition and the mind. They are not fully the same thing, but most of the time I talked to someone identifying as "postrationalist" they picked up the term from David Chapman and were contrasting themselves to historical rationalism (and sometimes confusing them for current rationalists), and not rationality as practiced on LW.

Replies from: Hazard
comment by Hazard · 2019-07-31T17:01:27.213Z · LW(p) · GW(p)

I'd buy that.

Any idea of a good recent thing/person/blog that embodies that historical rationalist mindset? The only context I have for the historical rationalists is Descartes, and I have not personally seen anyone who felt super Descartes-esque.

Replies from: habryka4
comment by habryka (habryka4) · 2019-07-31T18:26:59.690Z · LW(p) · GW(p)

The default book that I see mentioned in conversation that explains historical rationalism is “Seeing like a state” though I have not read the whole book myself.

Replies from: Hazard
comment by Hazard · 2019-07-31T19:43:35.687Z · LW(p) · GW(p)

Cool. My back of the mind plan is "Actually read the book, find big names in the top down planning regimes, see if they've written stuff" for whenever I want to replace my Descartes stereotype with substance.

comment by Hazard · 2018-03-15T13:17:49.988Z · LW(p) · GW(p)

Sometimes when I talk to friends about building emotional strength/resilience, they respond with "Well I don't want to become a robot that doesn't feel anything!" to paraphrase them uncharitably.

I think Wolverine is a great physical analog for how I think about emotional resilience. Every time Wolverine gets shot/stabbed/clubbed it absolutely still hurts, but there is an important way in which these attacks "don't really do anything". On the emotional side, the aim is not that you never feel a twinge of hurt/sorrow/jealousy etc. but that said pain is felt, and nothing more happens besides that twinge of pain (unless those emotions held information that would be useful to update on).

Likewise, though I'm not really a Marvel buff, I'm assuming Wolverine can still die. Though he can heal crazy fast, it's still conceivable that he could be physically assaulted in such a way that he can't recover. Same for the emotions side. I'm sure that for most emotionally resilient people there is some conceivable, very specific idiosyncratic scenario that could "break them".

That doesn't change the fact that you're a motherfucking bad-ass with regenerative powers and can take on most threats in the multiverse.

Replies from: Viliam
comment by Viliam · 2019-08-14T22:13:39.443Z · LW(p) · GW(p)

Maybe emotional resilience is bad for some forms of signaling. The more you react emotionally, the stronger you signal that you care about something. Keeping calm despite feeling strong emotions can be misinterpreted by others as not caring.

Misunderstandings created this way could possibly cause enough harm to outweigh the benefits of emotional resilience. Or perhaps the balance depends on some circumstances, e.g. if you are physically strong, people will be naturally afraid to hurt you, so then it is okay to develop emotional resilience about physical pain, because it won't result in them hurting you more simply because "you don't mind it anyway".

Replies from: Richard_Kennaway, Kaj_Sotala
comment by Richard_Kennaway · 2019-10-07T14:04:36.733Z · LW(p) · GW(p)

That problem should be addressed by better mastery over one's presentation, not by relinquishing mastery over one's emotions.

comment by Kaj_Sotala · 2019-10-07T10:25:03.351Z · LW(p) · GW(p)
Keeping calm despite feeling strong emotions can be misinterpreted by others as not caring.

To some extent, the interpretation is arguably correct; if you personally suffer from something not working out, then you have a much greater incentive to actually ensure that it does work out. If a situation going bad would cause you so much pain that you can't just walk out from it, then there's a sense in which it's correct to say that you do care more than if you could just choose to give up whenever.

comment by Hazard · 2019-09-09T17:29:41.022Z · LW(p) · GW(p)

Quick description of a pattern I have that can muddle communication.

"So I've been mulling over this idea, and my original thoughts have changed a lot after I read the article, but not because of what the article was trying to persuade me of ..."

General Pattern: There is a concrete thing I want to talk about (a new idea - ???). I don't say what it is, I merely provide a placeholder reference for it ("this idea"). Before I explain it, I begin applying a bunch of modifiers (typically by giving a lot of context: "This idea is a new take on a domain I've previously had thoughts on", "there was an article involved in changing my mind", "that article wasn't the direct cause of the mind change")

This confuses a lot of people. My guess is that interpreting statements like this requires a lot more working memory. If I introduce the main subject, and then modify it, people can "mentally modify" the subject as I go along. If I don't give them the subject, they need to store a stack of modifiers, wait until I get to the subject, and then apply all those modifiers they've been storing.

I notice I do this most when I expect the listener will have a negative gut reaction to the subject, and I'm trying to preemptively do a bunch of explanation before introducing it.

Anyone notice anything similar?

Replies from: jimrandomh, habryka4
comment by jimrandomh · 2019-09-09T23:14:41.750Z · LW(p) · GW(p)

Yep, I notice this sometimes when other people are doing it. I don't notice myself doing it, but that's probably because it's easier to notice from the receiving end.

In writing, it makes me bounce off. (There are many posts competing for my attention, so if the first few sentences fail to say anything interesting, my brain assumes that your post is not competitive and moves on.) In speech, it makes me get frustrated with the speaker. If it's in speech and it's an interruption, that's especially bad, because it's displacing working memory from whatever I was doing before.

comment by habryka (habryka4) · 2019-09-09T21:26:49.245Z · LW(p) · GW(p)

I also do this a lot, and think it's not always a mistake, but I agree that it imposes significant cognitive burden on my conversational partner. 

Replies from: Hazard
comment by Hazard · 2019-09-10T01:27:28.530Z · LW(p) · GW(p)

Do you also do it as a preemptive move like I described, or for other reasons?

comment by Hazard · 2019-07-25T17:06:59.155Z · LW(p) · GW(p)

Ribbon Farm captured something that I've felt about nomadic travel. I'm thinking back to a 2 month bicycle trip I did through Vietnam, Cambodia, and Laos. During that whole trip, I "did" very little. I read lots of books. Played lots of cards. Occasionally chatted with my biking partner. "Not much". And yet when movement is your natural state of affairs, every day is accompanied by a feeling of progress and accomplishment.

comment by Hazard · 2018-08-10T18:36:16.178Z · LW(p) · GW(p)

I love the experience of realizing what cognitive algorithm I'm running in a given scenario. This is easiest to spot when I screw something up. Today, I misspelled the word "process" by writing three "s"s instead of two. I'm almost certain that while writing the word, there was a cached script of "this word has one more 's' than feels right, so add another one" that activated as I wrote the 1st "s", but then some idea popped into my mind (small context switch, working memory dump?) and I then executed "this word has one more 's' than feels right, so add another one" an extra time.

I don't spell the word "process" correctly by having memorized the correct spelling. I spell the word correctly by doing a memorized improper spelling and triggering a bug-patch script, which, if my attention shifts, can cause a bug where that patch script runs twice. It's awe-inspiring to know that the bulk of my cognition is probably this sort of bug-patch, hacky, add-on code.

I don't expect to gain anything from this particular insight, but I love noticing these sorts of things. I intend to get better at this sort of noticing.

comment by Hazard · 2018-02-18T14:11:30.239Z · LW(p) · GW(p)

Something as simple as talking too loud can completely screw you over socially. There's a guy in one of my classes who talks at almost a shouting level when he asks questions, and I can feel the rest of the class tense up. I'd guess he's unaware of it, and this is likely a way he's been for many years which has subtly/not so subtly pushed people away from him.

Would it be a good idea to tell him that a lot of people don't like him because he's loud? Could I package that message such that it's clear I'm just trying to give him useful info, as opposed to trying to insult him?


This seems like the sort of problem where most of the time, no one will bring it up to him, unless they reach a "breaking point" in which case they'd tell him he's too loud via a social attack. It seems like there might be a general solution to this sort of conundrum.

comment by Hazard · 2020-12-04T00:08:08.340Z · LW(p) · GW(p)

To everyone on the LW team, I'm so glad we do the year in review stuff! Looking over the table of contents for the 2018 book I'm like "damn, a whole list of bangers", and even looking at top karma for 2019 has a similar effect. Thanks for doing something that brings attention to previous good work.

Replies from: Benito
comment by Ben Pace (Benito) · 2020-12-04T00:11:43.595Z · LW(p) · GW(p)

You're welcome :)

I'm loving reading yours and everyone's nominations, it's really great to hear about what people found valuable.

comment by Hazard · 2019-11-29T19:25:07.256Z · LW(p) · GW(p)

I've been having fun reading through Signals: Evolution, Learning, & Information. Many of the scenarios revolve around variations of the Lewis Signalling Game. It's a nice simple model that lets you talk about communication without having to talk about intentionality (what you "meant" to say).

Intention seems to mostly be about self-awareness of the existing signalling equilibrium. When I speak slowly and carefully, I'm constantly checking what I want to say against my understanding of our signalling equilibrium, and reasoning out implications. If I scream when I see a tiger, I'm still signalling, but various facts about the signalling equilibrium are not booted into consciousness.

So, claim: Lewis style signalling games are the root of all communication, from humans to dogs to bacteria. The "extra" stuff that humans seem to have, which is often called intent, has to do with having other/additional reasoning abilities, and being able to load one's signalling equilibrium into that reasoning system to further engage in shenanigans.
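As a gut check that nothing fancy is needed for this, here's a minimal sketch of a 2-state Lewis signalling game with simple reinforcement (my own toy implementation, not code from the book). Signals reliably come to carry information about the state with no "intent" anywhere in the system:

```python
import random

N = 2  # number of states, signals, and acts (a minimal 2x2 Lewis game)

# Urn weights: the sender maps states -> signals, the receiver maps signals -> acts.
sender = [[1.0] * N for _ in range(N)]
receiver = [[1.0] * N for _ in range(N)]

def draw(weights):
    """Pick an index with probability proportional to its weight."""
    return random.choices(range(len(weights)), weights=weights)[0]

for _ in range(10_000):
    state = random.randrange(N)       # nature picks a state
    signal = draw(sender[state])      # sender sees the state, emits a signal
    act = draw(receiver[signal])      # receiver sees only the signal, acts
    if act == state:                  # shared payoff on success:
        sender[state][signal] += 1.0  # reinforce the choices that worked
        receiver[signal][act] += 1.0

# After training, each state almost always gets its own dedicated signal.
for s in range(N):
    total = sum(sender[s])
    print(f"state {s}: signal probabilities {[w / total for w in sender[s]]}")
```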

comment by Hazard · 2019-11-24T18:21:47.609Z · LW(p) · GW(p)

"Moving from fossil fuels to renewable energy" but as a metaphor for motivational systems. Nate Soares replacing guilt seems to be trying to do this.

With motivation, you can more easily go, "My life is gonna be finite. And it's not like someone else has to deal with my motivation system after I die, so why not run on guilt and panic?"

Hmmmm, maybe something like, "It would be doper if people at large scale got to more renewable motivational systems, and for that change to happen it feels important for people growing up to be able to see those who have made the leap."

comment by Hazard · 2019-09-19T17:01:35.157Z · LW(p) · GW(p)

Reverse-Engineering a World View

I've been having to do this a lot for Ribbonfarm's Mediocratopia blog chain. Rao often confuses me and I have to step up my game to figure out where he's coming from.

It's basically a move of "What would have to be different for this to make sense?"

Confusion: "But if you're going up in levels, stuff must be getting harder, so even though you're mediocre in the next tier, shouldn't you be loosing slack, which is antithetical to mediocrity?"

Resolution: "What if there's weird discontinuous jumps to both skill and performance, and taking on a new frame/strategy/practice bumps you to the next level, without your effort going up proportionally?"



comment by Hazard · 2019-08-13T15:37:18.822Z · LW(p) · GW(p)

[Everything is "free" and we inundate you in advertisements] feels bad. First thought alternative is something like paid subscriptions, or micropayments per thing consumed. But the question is begged, how does anyone find out about the sites they want to subscribe to? If only there was some website aggregator that was free for me to use so that I could browse different possible subscriptions...

Oh no. Or if not oh no, it seems like the selling eyeballs model won't go away just because alternatives exist, if only from the "people need to somehow find out about the thing they are paying for" side.

I could probably do with getting a stronger sense of why selling eyeballs feels bad. I'm also probably thinking about this too abstractly and could do with getting more concrete.

Replies from: William_Darwin
comment by William_Darwin · 2019-08-13T19:30:24.679Z · LW(p) · GW(p)

Maybe it has something to do with the sentiment that "if it's free, the product is you". Perhaps without paying some form of subscription, you feel that there is no 'bounded' payment for the service - as you consume more of any given service, you are essentially paying more (in cognitive load or something similar?).

Kind of feels like fixed vs variable costs - often you feel a lot better with fixed as it tends to be "more valuable" the more you consume.

Just an off-the-cuff take based on personal experience, definitely interested in hearing other takes.

comment by Hazard · 2018-03-04T14:30:45.390Z · LW(p) · GW(p)

The university I'm at has meal plans where you get a certain number of blocks (meal + drink + side). These are things that one has, and uses to buy stuff. Last week at dinner, I gave the cashier my order and he said "Sorry man, we ran out of blocks." In case I didn't explain blocks well enough: this is a statement that makes no sense.

I completely broke the flow of the back and forth and replied with a really confused, "Huh?" At that point the guy and another worker started laughing. Turns out they'd been coming up with nonsensical lines and seeing how many people they would fly past.

Moral of this story, I think the only reason I noticed my confusion and didn't do mental gymnastics to "make it make sense" was because I was really tired. Yep, the greatest weapon I wield against the pressures of social reality is my desire to go to bed.

comment by Hazard · 2020-01-05T14:33:06.023Z · LW(p) · GW(p)

I've been writing on twitter more lately. Sometimes when I'm trying to express an idea, to generate progress I'll think "What's the shortest sentence I can write that convinces me I know what I'm talking about?" This is different from "What's a simple but no simpler explanation for the reader?"

Starting a twitter thread and forcing out several tweet-sized chunks of ideas is quite helpful for that. It helps get the concept clearer in my head, and then I have something out there and I can dwell on how I'd turn it into a consumable for others.

Replies from: Hazard
comment by Hazard · 2020-02-28T06:31:21.237Z · LW(p) · GW(p)

I've been writing A LOT on twitter lately. It's been hella fun.

One thing that seems clear. Twitter threads are not the place to hash out deep disagreements start to finish. When you start multi threading, it gets chaotic real fast, and the character limit is a limiting force.

On the other side of things, it feels great for gestating ideas, and getting lots of leads on interesting ideas.

1) Leads: It helps me increase my "known unknowns". There's a lot of topics, ideas, disciplines I see people making offhand comments about, and while it's rarely enough to piece together the whole idea, I often can pick up the type signature and know where the idea relates to other ideas I am familiar with. This is dope. Expand your anti-library.

2) Gestation: there's a limit to how much you can squeeze into a single tweet, but threading really helps to shotgun blast out ideas. It often ends up being less step-by-step carefully reasoned arg, and more lots of quasi-independent thoughts on the topic that you then connect. Also, I easily get 5x engagement on twitter, and other people throwing in their thoughts is really helpful.

I know Raemon and crew have mentioned trying to help with more gestation and development of ideas (without sacrificing overall rigor). post-rat-twitter / strangely-earnest-twitter feels like it's nailed the gestation part. Might be something to investigate.

Replies from: Hazard
comment by Hazard · 2020-02-28T06:33:31.872Z · LW(p) · GW(p)

See this for the best example of rapid brainstorming, and the closest twitter has to long form content.

comment by Hazard · 2019-11-29T03:24:28.663Z · LW(p) · GW(p)

Re Mental Mountains [LW · GW], I think one of the reasons that I get worried when I meet another youngin that is super gung-ho about rationality/"being logical and coherent", is that I don't expect them to have a good Theory of How to Change Your Mind. I worry that they will reason out a bunch of conclusions, succeed in high-level changing their minds, think that they've deeply changed their minds, but instead leave hordes of unresolved emotional memories/models that they learn to ignore [LW · GW] and that fuck them up later.

comment by Hazard · 2019-10-30T23:10:20.964Z · LW(p) · GW(p)

Weird hack for a weird tick. I've noticed I don't like audio abruptly ending. Like, sometimes I've listened to an entire podcast on a walk, even when I realized I wasn't into it, all because I anticipated the twinge of pain from turning it off. This is resolved by turning the volume down until it is silent, and then turning it off. Who'd of thunk it...

comment by Hazard · 2019-04-01T01:22:38.055Z · LW(p) · GW(p)

Me circa March 2018

"Should"s only make sense in a realm where you are divorced form yourself. Where you are bargaining with some other being that controls your body, and you are threatening it.

Update: This past week I've had an unusual amount of spontaneous introspective awareness of moments when I was feeling pulled by a should, especially one that came from comparing myself to others. I've also been meeting these thoughts with an, "Oh interesting. I wonder why this made me feel a should?" as opposed to a standard "endorse or disavow" response.

Meta Thoughts [LW(p) · GW(p)]: What do I know about "should"s that I didn't know in March 2018?

I'm more aware of how incredibly pervasive "should"s are in my thinking. Last Saturday alone I counted over 30 moments of feeling the negative tug of some "should".

I now see that even for things I consider cool, dope, and virtuous, I've been using "you should do this or else" to get myself to do them.

Since CFAR last fall [LW · GW] I've gained a lot of metis on aligning myself, a task that I've previously trivialized or brought in "willpower" to conquer. Last year I was more inclined to go, "Well okay fine, I'm still saying I should do XYZ, but the part of me that is resisting that is actually just stupid and deserves to be coerced."

comment by Hazard · 2019-02-03T14:09:49.051Z · LW(p) · GW(p)

From Gwern's about page:

I personally believe that one should think Less Wrong and act Long Now, if you follow me.

Possibly my favorite catch-phrase ever :) What do I think is hiding there?

  • Think Less Wrong
    • Self anthropology- "Why do you believe what you believe?"
    • Hugging the Query and not sinking into confused questions
    • Litany of Tarski
    • Notice your confusion - "Either the story is false or you model is wrong"
  • Act Long Now
    • Cultivate habits and practice routines that seem small / trivial on a day/week/month timeline, but will result in you being superhuman in 10 years.
    • Build abstractions where you are acutely aware of where it leaks, and have good reason to believe that leak does not affect the most important work you are using this abstraction for.
    • What things trigger "Man, it sure would be helpful if I had data on XYZ from the past 8 years"? Start tracking that.
Replies from: Hazard
comment by Hazard · 2019-12-04T14:32:37.749Z · LW(p) · GW(p)

What am I currently doing to Act Long Now? (Dec 4th 2019)

  • Switching to Roam: Though it's still in development and there are a lot of technical hurdles to this being a long now move (they don't have good import export, it's all cloud hosted and I can't have my own backups), putting ideas into my roam network feels like long now organization for maximized creative/intellectual output over the years.
  • Trying to milk a lot of exploration out of the next year before I start work, hopefully giving myself springboards to more things at points in the future where I might not have had the energy to get started / make the initial push.
  • Being kind.
  • Arguing Politics* With my Best Friends [LW · GW]

What am I currently doing to think Less Wrong?

  • Writing more has helped me hone my thinking.
  • Lots of progress on understanding emotional learning [? · GW] (or more practically, how to do emotional unlearning) allowing me to get to a more even-keeled center from which to think and act.
  • Getting better at ignoring the bottom line [LW · GW] to genuinely consider what the world would be like for alternative hypothesis.
Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-12-04T22:35:10.002Z · LW(p) · GW(p)

This is a great list! I'd be curious about things you are currently doing to act short now and think more wrong as well. I often find I get a lot out of such lists.

Replies from: Hazard
comment by Hazard · 2019-12-04T23:32:25.791Z · LW(p) · GW(p)

Act Short Now

  • Sleeping in
  • Flirting more

Think More Wrong

  • I no longer buy that there's a structural difference between math/the formal/a priori and science/the empirical/a posteriori.
  • Probability theory feels sorta lame.
comment by Hazard · 2018-11-04T13:37:47.843Z · LW(p) · GW(p)

Claim: There's a headspace you can be in where you don't have a bucket [LW · GW] for explore/babble [? · GW]. If you are entertaining an idea or working through a plan, it must be because you already expect it to work/be interesting. If your prune filter is also growing in strength and quality, then you will be abandoning ideas and plans as soon as you see any reasonable indicator that they won't work.

Missing that bucket and enhancing your prune filter might feel like you are merely growing up, getting wiser, or maybe more cynical. This will be really strongly felt if the previous phase in your life involved you diving into lots of projects only to realize some time and money later that they won't work out. The mental motion of, "Aha! This plan leaves ABC completely unspecified and I'd probably fall apart when reaching that roadblock," will be accompanied by a, "Man, I'm so glad I noticed that, otherwise I would have wasted a whole day/week/month. Go prune!".

Until you get a new bucket for explore, attempts to get you to "think big" and "get creative" and "let it all out in a brainstorm" will feel like attacks on your valuable time. Somehow, you need to get a strong felt sense for explore being its own, completely viable option, which in no way obliges you to act on what you've explored.

Next thoughts: What is needed for me to deeply feel explore as an option, and what things might be stopping me from doing so? *tk*

comment by Hazard · 2018-10-26T20:44:02.692Z · LW(p) · GW(p)

You can have infinite aspirations, but infinite plans are often out to get you.

When you make new plans, run more creative "what if?" inner-sims, sprinkle in more exploit, and ensure you have bounded loss if things go south.

When you feel like quitting, realize you have the opportunity to learn and update by asking, "What's different between now and when I first made this plan?"

Make your confidence in your plans explicit, so if you fail you can be surprised instead of disappointed.

If the thought of giving up feels terrible, you might need to learn how to lose.

And of course, if you can't afford to lose,

![](https://i.imgur.com/80acRCF.jpg)

comment by Hazard · 2018-07-29T15:06:41.545Z · LW(p) · GW(p)

Stub Post: Thoughts on why it can be hard to tell if something is hindsight bias or not.

Imagine one's thought process as an idea-graph, with the process of thinking being hopping around nodes. Your long term memory can be thought of as the nodes and edges that are already there and persist strongly. The contents of your working memory are like temporary nodes and edges that are in your idea graph, and everything that is close to them gets a +10 to speed-of-access. A short term memory node can even cause edges to pop up between two other nodes around it.

Claim: There is no obvious felt/perceived experience that accompanies the creation of an edge, only the traversal of an edge.

Implication: If I observed mentally hopping from A to B to C, I could see and admit that B was responsible for getting to C. But if the presence of B in my working memory creates an edge directly from A to C, it "feels like" I jump from A to C, and that B doesn't have anything to do with it.
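A toy sketch of that model in code (purely illustrative; the graph and the mechanism are hypothetical):

```python
# Long-term memory: persistent directed edges between idea-nodes.
long_term = {"A": ["B"], "B": ["C"], "C": []}

def experienced_graph(graph, working_memory):
    """The idea-graph as it's actually traversed: nodes held in working
    memory silently add shortcut edges between their neighbors."""
    g = {node: list(edges) for node, edges in graph.items()}
    for loaded in working_memory:
        for node, edges in graph.items():
            if loaded in edges:
                # Edge creation has no felt experience, so the new A -> C
                # hop will feel like it owes nothing to B.
                g[node].extend(graph[loaded])
    return g

print(experienced_graph(long_term, working_memory={"B"}))
# {'A': ['B', 'C'], 'B': ['C'], 'C': []} -- with B loaded, A "jumps straight" to C
```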

Replies from: Hazard
comment by Hazard · 2018-08-05T13:22:10.460Z · LW(p) · GW(p)

This seems to be in accord with things like how the framing of questions has a huge effect on what people's answers are. There are probably some domains where you don't actually have much of a persistent model, and your "model" mostly consists of the temporary connections created by the contents of your working memory.

comment by Hazard · 2018-07-18T18:39:08.113Z · LW(p) · GW(p)

Utility functions aren't composable! Utility functions aren't composable! Sorry to shout, I've just realized a very specific way I've been wrong for quite some time.

VNM utility completely ignores the structure of outcomes and "similarities" between outcomes. U(1 apple) doesn't need to have any relation to U(2 apples). With decision scenarios I'm used to interacting with, there are often ways in which it is natural to think of outcomes as compositions or transformations of other outcomes or objects. When I think of outcomes, they can be more or less similar to each other, even if I'm not talking about value. From facing a lot of scenarios like this, it's easy to think in terms of "Find some way to value the smaller set of outcomes that can compose to make all outcomes", which makes it easy to expect such composability to be a property of how VNM utility works. But it's not! It really really isn't.
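To make that concrete (toy numbers of my own, not from any source): for the preference ordering 2 apples ≻ 1 apple ≻ 0 apples, the VNM axioms are perfectly happy with

```latex
U(0\ \text{apples}) = 0, \qquad U(1\ \text{apple}) = 10, \qquad U(2\ \text{apples}) = 11
```

and with any positive affine transformation $U'(x) = aU(x) + b$, $a > 0$. Nothing forces $U(2\ \text{apples}) = 2\,U(1\ \text{apple})$; the numbers encode preferences over lotteries, not the additive structure of the outcomes.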

I've recently been reading about ordinal numbers, and getting familiar with the idea that you can have things that have order, but no notion of distance. I had that in the back of my mind when going through the wikipedia page for VNM utility, and I think that's what made it click.

Replies from: habryka4
comment by habryka (habryka4) · 2018-07-18T18:53:29.485Z · LW(p) · GW(p)

Yes, indeed quite important. This is a common confusion that has often led me down weird conversational paths. I think some microeconomics has made this most clear to me, because in there you seem to be constantly throwing tons of affine transformations at your utility functions to make them convenient and get you analytic solutions, and it becomes clear very quickly that you are not preserving the relative magnitude of your original utility function.

Replies from: Hazard
comment by Hazard · 2018-07-19T01:52:36.942Z · LW(p) · GW(p)

I think one of the reasons it took me so long to notice was that I was introduced to VNM utility in the context of game theory, and winning at card games. Most of those problems do have the property of the utility of some base scoring system composing well to generate the utility of various end games. Since that was always the case, I guess I thought that it was a property of utility, and not the games.

comment by Hazard · 2018-02-09T21:09:16.961Z · LW(p) · GW(p)

I pointed out in this post that explanations can be confusing because you lack some assumed knowledge, or because the piece of info that will make the explanation click has yet to be presented (assuming a good/correct explanation to begin with). It seems like there can be a similar breakdown when facing confusion in the process of trying to solve a problem.

I was working on some puzzles in assembly code, and I made the mistake of interpreting hex numbers as decimal (treating 0x30 as 30 instead of 48). This led me to draw a memory map that looked really weird and confusing. There also happened to be a bunch of nested functions that would operate on this block of memory. I definitely noticed my confusion, but I think I implicitly predicted that my confusing memory diagram would make sense in light of investigating the functions more.

In this particular example, that was the wrong prediction to make. I'm curious if I would have made the same prediction if I had been making it explicitly. This seems to point at a general situation one could find themselves in when noticing confusion: "Did I screw something up earlier, or do I just not have enough info for this to make sense?"

Again, in my assembly example, I might have benefited from examining my confusion. I could have noticed that the memory diagram I was drawing didn't just not immediately make sense, but that it also violated most rules of "how to not ruin your computer through bad code".

comment by Hazard · 2021-07-25T14:20:40.253Z · LW(p) · GW(p)

I'm reflecting back on this sequence [? · GW] I started two years ago. There's some good stuff in it. I recently made a comic strip that has more of my up to date thoughts on language here. Who knows, maybe I'll come back and synthesize things.

comment by Hazard · 2020-12-16T15:31:40.685Z · LW(p) · GW(p)

The way I see "Politics is the Mind Killer" get used, it feels like the natural extension is "Trying to do anything that involves high stakes or involves interacting with the outside world or even just coordinating a lot of our own Is The Mind Killer".

From this angle, a commitment to prevent things from getting "too political" to "avoid everyone becoming angry idiots" is also a commitment to not having an impact.

I really like how jessica re-frames things in this [LW(p) · GW(p)] comment. The whole comment is interesting, here's a snippet:

Basically, if the issue is adversarial/deceptive action (conscious or subconscious) rather than simple mistakes, then "politics is the mind-killer" is the wrong framing. Rather, "politics is a domain where people often try to kill each other's minds" is closer.

Which would further transform my new, no longer catchy phrase to "Trying to do anything that involves high stakes or involves interacting with the outside world or even just coordinating a lot of our own will result in people trying to kill each other's minds."

Which has very different repercussions from the original saying.

Replies from: Dagon
comment by Dagon · 2020-12-19T06:46:34.962Z · LW(p) · GW(p)

The original post [LW · GW] was mostly about not UNNECESSARILY introducing politics or using it as examples, when your main topic wasn't about politics in the first place.  They are bad topics to study rationality on.  

They are good topics to USE rationality on, both to dissolve questions and to understand your communication goals.  

They are ... varied and nuanced in applicability ... topics to discuss on LessWrong.  Generally, there are better forums to use when politics is the main point and rationality is a tool for those goals.  And generally, there are better topics to choose when rationality is the point and politics is just one application.  But some aspects hit the intersection just right, and LW is a fine place.  

comment by Hazard · 2020-01-13T02:26:09.987Z · LW(p) · GW(p)

So a thing Galois theory does is explain:

Why is there no formula for the roots of a fifth (or higher) degree polynomial equation in terms of the coefficients of the polynomial, using only the usual algebraic operations (addition, subtraction, multiplication, division) and application of radicals (square roots, cube roots, etc)?

Which makes me wonder: would there be a formula if you used more machinery than normal stuff and radicals? What does "more than radicals" look like?
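From what I remember (worth double-checking before leaning on it), one concrete answer is the Bring radical: radical Tschirnhaus transformations reduce every quintic to Bring-Jerrard form,

```latex
x^5 + x + a = 0,
```

and defining $\mathrm{BR}(a)$ as the unique real root of that equation (unique since $\frac{d}{dx}(x^5 + x + a) = 5x^4 + 1 > 0$) gives enough extra machinery to solve all quintics. Hermite (1858) did something similar using elliptic modular functions.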

Replies from: AprilSR, paragonal
comment by AprilSR · 2020-01-13T11:43:22.339Z · LW(p) · GW(p)

I think people usually just use "the number is the root of this polynomial" in and of itself to describe them, which is indeed more than radicals. There probably are more roundabout ways to do it, though.

comment by Hazard · 2019-11-24T18:15:55.697Z · LW(p) · GW(p)

There are two times when Occam's razor comes to mind. One is for addressing "crazy" ideas a la "The witch down the road did it", and one is for picking which legit-seeming hypothesis I might prioritize in some scientific context.

For the first one, I really like Eliezer's reminder that when going with "The witch did it" you have to include the observed data in your explanation.

For the second one, I've been thinking about the simplicity formulation that one of my professors uses. Roughly, A is simpler than B if all data that is consistent with A is a subset of all data that is consistent with B.

His motivation for using this notion has to do with minimizing the number of times you are forced to update.
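In symbols, a literal transcription of that statement (my notation, so treat with care):

```latex
C(H) := \{\, d \in \mathcal{D} : d \text{ is consistent with } H \,\}, \qquad
A \text{ is simpler than } B \iff C(A) \subseteq C(B)
```

Read this way, the simpler hypothesis admits fewer possible observations: it rules out more, so it is the stronger, more falsifiable claim.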

Replies from: Lanrian
comment by Lukas Finnveden (Lanrian) · 2019-11-24T19:39:37.532Z · LW(p) · GW(p)

Roughly, A is simpler than B if all data that is consistent with A is a subset of all data that is consistent with B.

Maybe the less rough version is better, but this seems like a really bad formulation. Consider (a) an exact enumeration of every event that ever happened, making no prediction of the future, vs (b) the true laws of physics and the true initial conditions, correctly predicting every event that ever happened and every event that will happen.

Intuitively, (b) is simpler to specify, and we definitely want to assign (b) a higher prior probability. But according to this formulation, (a) is simpler, since all future events are consistent with (a), while almost none are consistent with (b). Since both theories have equally much evidence, we'd be forced to assign higher probability to (a).

Replies from: Hazard
comment by Hazard · 2019-11-25T05:06:55.690Z · LW(p) · GW(p)

I think me adding more details will clear things up.

The setup presupposes a certain amount of realism. Start with Possible Worlds Semantics, where logical propositions are attached to / refer to the set of possible worlds in which they are true. A hypothesis is some proposition. We think of data as getting some proposition (in practice this is shaped by the methods/tools you have to look at and measure the world), which narrows down the allowable possible worlds consistent with the data.

Now is the part that I think addresses what you were getting at. I don't think there's a direct analog in my setup to your (a). You could consider the hypothesis/proposition, "the set of all worlds compatible with the data I have right now", but that's not quite the same. I have more thoughts, but first, do you still feel like your idea is relevant to the setup I've described?


Replies from: Lanrian
comment by Lukas Finnveden (Lanrian) · 2019-11-25T07:59:03.106Z · LW(p) · GW(p)

That does seem to change things... Although I'm confused about what simplicity is supposed to refer to, now.

In a pure bayesian version of this setup, I think you'd want some simplicity prior over the worlds, and then discard inconsistent worlds and renormalize every time you encounter new data. But you're not speaking about simplicity of worlds, you're speaking about simplicity of propositions, right?

Since a proposition is just a set of worlds, I guess you're speaking about the combined simplicity of all the worlds. And it makes sense that that would increase if the proposition is consistent with more worlds, since any of the worlds would indeed lead to the proposition being true.

So now I'm at "The simplicity of a proposition is proportional to the prior-weighted number of worlds that it's consistent with". That's starting to sound closer, but you seem to be saying that "The simplicity of a proposition is proportional to the number of other propositions that it's consistent with"? I don't understand that yet.

(Also, in my formulation we need some other kind of simplicity for the simplicity prior.)

Replies from: Hazard
comment by Hazard · 2019-11-25T21:19:17.414Z · LW(p) · GW(p)

I'm currently turning my notes from this class into some posts, and I'll wait to continue this until I'm able to get those up. Then, hopefully, it will be easier to see if this notion of simplicity is lacking. I'll let you know when that's done.

comment by Hazard · 2019-11-24T01:01:47.174Z · LW(p) · GW(p)

"Contradictions aren't bad because they make you explode and conclude everything, they're bad because they don't tell you what to do next."

Quote from a professor of mine who makes formalisms for philosophy of science stuff.

Replies from: Pattern
comment by Pattern · 2019-11-25T20:54:26.045Z · LW(p) · GW(p)

Contradictions tell you to fix the contradiction/s next.

comment by Hazard · 2019-08-17T23:44:45.755Z · LW(p) · GW(p)

Looking at my calendar over the last 8 months, it looks like my attention span for a project is about 1-1.5 weeks. I'm musing on what it would look like to lean into that. Have multiple projects at once? Work extra hard to ensure I hit save points before the weekends? Only work on things in week-long bursts?

Replies from: Hazard, Hazard, Raemon
comment by Hazard · 2020-01-12T18:31:03.346Z · LW(p) · GW(p)

I'm noticing an even more granular version of this. Things that I might do casually (reading some blog posts) have a significant effect on what's loaded into my mind the next day. Smaller than the week level, I'm noticing a 2-3 day cycle of "the thing that was most recently in my head" and how it affects the question of "If I could work on anything rn what would it be?"

This week on Tuesday I picked Wednesday as the day I was going to write a sketch. But because of something I was thinking before going to bed, on Wednesday my head was filled with thoughts on urbex. So I switched gears, and urbex thoughts ran their course through Wednesday, and on Thursday I was ready to actually write a sketch (comedy thoughts need to be loaded for that)

comment by Hazard · 2019-08-18T01:20:15.695Z · LW(p) · GW(p)

Possible hack related to small wins. Many of the projects that I stopped got stopped part way through "continuing more of the same". One was writing my Hazardous Guide to Words [? · GW], and the other was researching how the internet works [LW · GW]. Maybe I could work on one cohesive thing for longer if there was a significant victory and gear shift after a week. Like, if I was making a video game, "Yay, I finished making all the art assets, onto actual code" or something.

Replies from: Raemon
comment by Raemon · 2019-08-18T03:45:43.198Z · LW(p) · GW(p)

The target audience for Hazardous Guide is friends of yours, correct? (vaguely recall that)

A thing that normally works for writing is that after each chunk, I get to publish a thing and get comments. One thing about Hazardous Guide is that it mostly isn't new material for LW veterans, so I could see it getting less feedback than average. Might be able to address by actually showing parts to friends if you haven't

Replies from: Hazard
comment by Hazard · 2019-08-18T16:02:35.996Z · LW(p) · GW(p)

Ooo, good point. I was getting a lot less feedback from it than from other things. There's one piece of feedback which is "am I on the right track?" and another that's just "yay, people are engaging!", both of which seem relevant to motivation.

comment by Raemon · 2019-08-18T00:09:00.757Z · LW(p) · GW(p)

If you can be deliberate about learning from projects, this could actually be a good setup – doing one project a week, learning what you can from it, and moving on actually seems pretty good if you're optimizing for skill growth.

Replies from: Hazard
comment by Hazard · 2019-08-18T01:23:38.706Z · LW(p) · GW(p)

Yeah, being explicit about 1 week would likely help. The projects that made me make this observation were all ones where I was trying to do more than a week's worth of stuff, and a week is when I decided to move to something else.

I expect "I have a week to learn about X" would both take into account waning/waxing interest, and add a bit of rush-motivation.

comment by Hazard · 2019-07-29T15:23:00.698Z · LW(p) · GW(p)

Elephant in the Brain style model of signaling:

Actually showing that you have XYZ skill/trait is the most beneficial thing you can do, because others can verify you've got the goods and will hire you / like you / be on your team. So now there's an incentive for everyone to be constantly displaying their skills/traits. This takes up a lot of time and energy, and I'm gonna guess that anti-competition norms made "showing off" a bad thing to do, to prevent this "over-saturation".

So if there's a "no showing-off" norm, what can you do? You signal (do indirect things to try and convey you have a skill or trait). People still signal all the time and it takes up time and energy, but it does seem a bit less wasteful than everyone "showing off" all the time.

Replies from: Ruby, Dagon
comment by Ruby · 2019-07-29T17:30:38.268Z · LW(p) · GW(p)

This has been my model too, deriving from EitB. But it's probably not just about preventing the over-saturation; it's also to the benefit of those who are more skilled at signaling covertly to promote a norm that disadvantages those who only have skills, but not the covert-signaling skills.

Replies from: Hazard
comment by Hazard · 2019-07-30T02:03:11.117Z · LW(p) · GW(p)

Yeah, I see those playing together in the form of the base norm being about anti-competition, and then people wanting to enforce the norm both from a general "I'll get punished if I don't support it" and from "I personally can skillfully subvert it, so enforcing this norm helps me keep the unskilled out".

comment by Dagon · 2019-07-29T16:09:00.633Z · LW(p) · GW(p)

Be careful not to oversimplify - norms are complex, mutable, and context-sensitive. "no showing off" is not a very complete description of anyone's expectations. No showing off badly is closer, but "badly" is doing a LOT of work - in itself is a complex and somewhat recursive norm.

Finding out where "showing" skills is aligned with "exercising" those skills to achieve an outcome is non-trivial, but ever so wonderful if you do find a profession and project where it's possible.

See also https://en.wikipedia.org/wiki/Countersignaling , the idea where if you're confident that you're assumed to have some skills, you actually show HIGHER skills by failing to signal those skills.

Replies from: Hazard
comment by Hazard · 2019-07-30T02:05:00.350Z · LW(p) · GW(p)

Thanks on reminding me of nuance. Yeah, the "badly" does a lot of work, but also puts me in the right head space to guess at when I do and don't think real people would get annoyed at someone "showing off".

comment by Hazard · 2019-07-25T17:54:26.252Z · LW(p) · GW(p)

When I first read The Sequences, why did I never think to seriously examine if I was wrong/biased/partially-incomplete in my understanding of these new ideas?

Hyp: I believed that fooling one's self was all identity driven. You want to be a type of person, and your bias lets you comfortably sink into it. I was unable to see my identity. I also had a self narrative of "Yeah, this Eliezer dude, what ever, I'll just see if he has anything good to say. I don't need to fit in with the rationalists."

I saw myself as "just" taking in and thinking about some arguments, and these arguments were convincing to me, and so they stuck and I took them in. I didn't apply lots of rigor or self reflection, because I didn't think I needed careful thought to avoid being biased if I couldn't see a clear and obvious identity on the line.

(spoiler, identity was on the line, and also your reasoning can be flawed for a bazillion non identity based reasons)

comment by Hazard · 2019-07-25T16:59:42.062Z · LW(p) · GW(p)

Legibility. Seeing like a state. Reason isn't magic [LW · GW]. The secret of our success.

There is chaos and one (or a state) is trying to predict and act on the world. It sure would be easier if things were simpler. So far, this seems like a pretty human/standard desire.

I think the core move of legibility is to declare that everything must be simple and easy to understand, and if reality (i.e. people) isn't as simple as our planned simplification, well, too bad for people.

As a rationalist/post-rationalist/person who thinks good, you don't have to do that. Giving in to the process of legibility is giving in to accepting a theory for the sake of a theory, even if it muffles a part of reality that matters. Don't do that.

comment by Hazard · 2019-07-21T14:16:49.770Z · LW(p) · GW(p)

"If we're all so good at fooling ourselves, why aren't we all happy?"

The zealot is only "fooling themselves" from the perspective of the "rational" outsider. The zealot has not fooled themselves. They have looked at the world and their reasoning processes have come to the clear and obvious conclusion that []. They have gri-gri, and it works.

But it seems like most of us are much better at fooling ourselves than we are at "happening to use the full capacity of our minds to come to false and useful conclusions". We have belief in belief. It's possible to work this into almost as strong of a fortress as the zealot, but it is more precarious.


comment by Hazard · 2019-07-19T17:33:00.880Z · LW(p) · GW(p)

(tid bit from some recent deep self examination I've been doing)

I incurred judgment-fueled "motivational debt" by aggressively buying into the idea "Talk is worthless, the only thing that matters is going out and getting results" at a time when I was so confident I never expected to fail. It felt like I was getting free motivation, because I saw no consequences to making this value judgment about "not getting results".

When I learned more, the possibility of failure became more real, and that cannon of judgement I'd built swiveled around to point at me. Oops.


Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-19T22:55:25.216Z · LW(p) · GW(p)

This seems to be a specific instance of a more general phenomenon that Leverage Research calls "De-simplification"

The basic phenomenon goes like this:

1. According to Leverage Research, your belief structure must always be such that you believe you can achieve your terminal values/goals.

2. When you're relatively powerless and unskilled, this means that by necessity you have to believe that the world is more simple than it is and things are easier to do than they are, because otherwise there'd be no way you could achieve your goals/values.

3. As you gain more skill and power, your ability to tackle complex and hard problems becomes greater, so you can begin to see more complexity and difficulty in the world and the problems you're trying to solve.

4. If you don't know about this phenomenon, it might feel like power and skills don't actually help you, and you're just treading water. In the worst case, you might think that power and ability actually make things worse. In fact, what's going on is that your new power and ability made salient things that were always there, but which you could not allow yourself to see. Being able to see things as harder or more complex is actually a signal that you've leveled up.

Replies from: Hazard
comment by Hazard · 2019-07-20T00:38:58.722Z · LW(p) · GW(p)

This is a very useful frame! Is the blog on Leverage Research's site where most of their stuff is, or would I go somewhere else if I wanted to read about what they've been up to?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-20T00:43:37.007Z · LW(p) · GW(p)

There's not really anywhere to go to read what Leverage has been up to, they're a very private organization. They did have an arm called Paradigm Academy that did teaching, which is where I learned this. However, Leverage recently downsized, and I'm not sure about the status of Paradigm or other splinter organizations.

comment by Hazard · 2019-07-10T14:16:07.985Z · LW(p) · GW(p)

I've spent the last three weeks making some simple apps to solve small problems I encounter, and practice the development cycle. Example.

I've already been sold on the concept of developing things in a Lean MVP style for products. Shorter feedback loops between making stuff and figuring out if anyone wants it. Less time spent making things people don't want to give you money for. It was only these past few weeks that I noticed the importance of an MVP approach for personal projects. Now it's a case of shortening the feedback loops between making stuff and figuring out if I care about what I've made. This is crucial for motivation. It's easy for me to go, "I'm gonna make a slick app!" and try to do the symbolic thing that is app development, and not spend time working towards what I cared about that made me start the project.

I see this a lot with blog posts as well. If I get in my head that this post should be "definitive" or "extra-well researched", I can spend a lot of time on that, even though I didn't actually care about it that much, and by the time I get to writing the thing that was in my heart I'm sick and tired of the idea and don't want to write.

comment by Hazard · 2019-07-07T17:41:59.336Z · LW(p) · GW(p)

I love attention, but I HATE asking for it. I've noticed this a few times before in various forms. This time it really clicked. What changed [LW(p) · GW(p)]?

  • This time around, the insight came in the context of performing magic. This made the "I love attention" part more obvious than other times, when I merely noticed, "I have an allergic reaction to seeming needy."
  • I was able to remember some of the context that this pattern arose from, and can observe "Yes, this may have helped me back then, but here are ways it isn't as helpful now, and it's not automatically terrible to ask for attention."
Replies from: Raemon
comment by Raemon · 2019-07-07T20:20:46.184Z · LW(p) · GW(p)

I realize this is my fault, but when I click "what changed" I'm not actually sure what comment it's linking to. (I'll improve the comment-linking UI this week hopefully so it's more clear which comments link where). Which comment did you mean to be linking to?

I'm interested in more details about what was going on in the particular example here (i.e. performing magic as in stage-magic? What made that different?)

Replies from: Hazard, Hazard
comment by Hazard · 2019-07-10T16:07:31.700Z · LW(p) · GW(p)

http://www.jhazard.com/posts/magic_is_dead.html

This is less about the noticing and more about effects of the previous frame.

Replies from: Raemon
comment by Raemon · 2019-07-10T18:59:12.350Z · LW(p) · GW(p)

I like this post, and think it'd be fine to crosspost to LW.

comment by Hazard · 2019-07-07T22:14:07.271Z · LW(p) · GW(p)

I'll be writing a post about this later. The comment it links to is the first child comment of the tippy top comment of this page. (yes, magic the performance art)

comment by Hazard · 2018-06-15T03:20:35.601Z · LW(p) · GW(p)

One of the more useful rat-techniques I've enjoyed has been the reframing of "Making a decision right here right now" to "Making this sort of decision in these sorts of scenarios". When considering how to judge a belief based on some arguments, the question becomes, "Am I willing to accept this sort of conclusion based on this sort of argument in similar scenarios?"

From that, if you accept claim-argument pair A "Dude, if electric forks were a good idea, someone would have done it by now", but not claim-argument pair B "Dude, if curing cancer was a good idea, someone would have done it by now", then it was never A's argument that made you believe the claim. You have some other unmentioned reasons, and those should be what's addressed.

Replies from: Hazard
comment by Hazard · 2018-08-10T15:13:50.660Z · LW(p) · GW(p)

Similarly, there is the re-framing, "what is the actual decision I am making?" One friend was telling me, "This linear algebra class is a waste of my time, I'd get more by skipping lecture and reading the book." When I asked him if he actually thought he'd read the book if he didn't go to lecture, he said probably not. Here, it felt like the choice was, "Go to lecture, or not?" but it would be better framed as, "Given I'm trying to learn linear algebra, what feasible paths do I have for learning it?" If you don't actually expect to be able to self-study, then you can no longer think of "just not going to lecture" as an option.

comment by Hazard · 2018-02-25T16:12:11.611Z · LW(p) · GW(p)

There are a few instances where I've had to "re-have" an idea 3 times, each in a slightly different form, before it stuck and affected me in any significant way. I noticed this when going through some old notebooks and seeing stub-thoughts of ideas that I was currently fleshing out (and had been unaware that I had given this thing thought before). One example is with TAPs. Two winters ago I was writing about an idea I called "micro habits/attitudes" and they felt super important, but nothing ever came of them. Now I see that basically I was reaching at something like TAPs.

It seems like it would be useful to have a mental move along the lines of "Tag this idea/concept/topic as likely to be hiding something useful even if I don't know what"

Replies from: Hazard
comment by Hazard · 2018-08-10T15:09:31.125Z · LW(p) · GW(p)

I recently was going through the past 3 years of notebooks, and this pattern is incredibly persistent.

comment by Hazard · 2019-12-16T20:26:39.841Z · LW(p) · GW(p)

So Kolmogorov Complexity depends on the language, but the complexity in any two languages differs by at most a constant (whatever the size of an interpreter from one to the other is).

This seems to mean that the complexity ordering of different hypotheses can be rearranged by switching languages, but "only so much". So

$$K_{L_1}(A) < K_{L_1}(B)$$

and

$$K_{L_2}(A) > K_{L_2}(B)$$

are both totally possible, as long as

$$|K_{L_1}(A) - K_{L_1}(B)| \leq 2c$$

where $c$ is the size of the interpreter between $L_1$ and $L_2$.

I see how if you care about orders of magnitude, the description language probably doesn't matter. But if you ever had to make a decision where it mattered if the complexity was 1,000,000 vs 1,000,001 then language does matter.
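To make the "only so much" concrete, here's a toy instance with made-up numbers. Suppose the interpreter constant between the two languages is $c = 100$:

$$K_{L_1}(A) = 50, \quad K_{L_1}(B) = 60 \quad \Rightarrow \quad A \text{ is simpler in } L_1$$

$$K_{L_2}(A) = 140, \quad K_{L_2}(B) = 130 \quad \Rightarrow \quad B \text{ is simpler in } L_2$$

Each hypothesis moved by less than $c$ ($90$ and $70$ respectively), yet the ordering flipped.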

Where is KC actually used, and in those contexts how sensitive are results to small reorderings like the one I presented?

Replies from: Viliam
comment by Viliam · 2019-12-16T21:04:01.456Z · LW(p) · GW(p)

I am not an expert, but my guess is that KC is only used in abstract proofs, where these details do not matter. Things like:

  • KC is not computable
  • there is a constant "c" such that KC of any message is smaller than its length plus c

Etc.

Replies from: Hazard
comment by Hazard · 2019-12-16T22:14:44.092Z · LW(p) · GW(p)

Yeah. I guess the only place I can remember seeing it referenced in action was with regard to assigning priors for Solomonoff induction. So I wonder if it changes anything there (though Solomonoff is already pretty abstracted away from other things, so it might not make sense to do a sensitivity analysis)
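One quick way to gauge the sensitivity (just a back-of-the-envelope, taking the usual $2^{-K}$ prior as given): the Solomonoff prior weights a hypothesis $h$ by $2^{-K(h)}$, so switching languages moves each unnormalized weight by at most a factor of $2^c$:

$$w(h) = 2^{-K(h)}, \qquad \frac{w_{L_2}(h)}{w_{L_1}(h)} = 2^{K_{L_1}(h) - K_{L_2}(h)} \in \left[2^{-c},\ 2^{c}\right]$$

So a language switch can shift the relative prior between two hypotheses by up to $2^{2c}$: bounded, but enormous in absolute terms, which is presumably why the choice of language gets waved away only "up to a constant".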

comment by Hazard · 2019-10-31T00:27:54.247Z · LW(p) · GW(p)

Mini Post, Litany of Gendlin related.

Changing your mind feels like changing the world. If I change my mind and now think the world is a shittier place than I used to (all my friends do hate me), it feels like I just teleported into a shittier world. If I change my mind and now think the world is a better place than I used to (I didn't leave the oven on at home, so my house isn't going to burn down!) it feels like I've just been teleported into a better world.

Consequence of the above: if someone is trying to change your mind, it feels like they are trying to change your world. If someone is trying to make you believe the world is a shittier place than you thought, it feels like they are trying to make your life shittier.

Now I recite the Litany of Gendlin like a good rationalist. Now let me try to walk through why that might be uncompelling to the average Joe.

Let's say all of your friends have secretly hated you for a while. Something has just happened (you saw one of their group chats where they were shit-talking you) and you are considering "Shit, what if they have been hating me for years?" You recite the Litany of Gendlin. It's ineffective. What's up?

It seems it has to be that your concept of "All my friends secretly hate me" is not in accord with what your friends actually hating you is like. You have already endured your friends secretly hating you. You have not yet endured believing "My friends secretly hate me". The belief can only do damage by interacting with other belief networks in your mind. Maybe having this belief triggers "Only an idiot could go years without noticing his friends hate him", which combines with "If I'm an oblivious idiot I won't be able to accomplish my goals" and "No one will love an oblivious idiot who can't accomplish their goals", and now the future does not feel safe.

It seems like the move you could pull that might best reduce the feeling of "believing this will make life shittier than not" is to imagine believing it and the world being shitty, and then to imagine not believing it, but the world still being shitty. I think this will help in many scenarios. I'd expect many Litany of Gendlin scenarios to be ones where ignoring the truth will create compounding trouble down the road. So the move is to imagine going along blissfully in denial, and then getting socked in the face by a crashing build-up. Compare that to the extra work and worry of believing now.

If you did that and came out with "Nope, it still seems like I'll be net better off to not believe", well shit, what was the scenario? I'm genuinely interested, and don't have immediate thoughts on whether or not you should change your mind.

(Looking for feedback on how useful you think this explanation and extra advice would be to a non-rat going through a Gendlin-style crisis)


Replies from: Pattern
comment by Pattern · 2019-11-10T14:46:33.541Z · LW(p) · GW(p)
Looking for feedback on how useful you think this explanation

The nature of this experience may vary between people. I'd say finding out something bad and having to deal with the impact of that is more common/of an issue than rejecting the way things are (or might be), though:

extra advice would be to a non rat going through a Gendlin style crisis)

Offhand, I'm not sure "rat" makes a difference here?

1. Figuring out what to do with new troubling information - making a plan and acting on it - can be hard. (Knowing what to do might help people with "accepting" their "new" reality?)

2. Just because you understand part of an issue doesn't mean you've wrapped your head around all the implications.

3. Realizing something "bad" can take a while. Processing might not happen all at once.

4. If it's taking you a long time to work something out, you might already know what the answer is, and be afraid of it.

5. This gets into an area where things vary depending on the person (and the situation) - sometimes people may have more trouble accepting "new negative realities", sometimes people are too fast to jump to negative conclusions.

comment by Hazard · 2019-07-21T15:44:48.768Z · LW(p) · GW(p)

Collecting some recent observations from some self-study:

Replies from: Hazard, Hazard
comment by Hazard · 2019-07-21T15:54:46.614Z · LW(p) · GW(p)

In my freshman fall of university, I realized I was incredibly judgmental of myself and felt I should be capable of everything. I "dealt with it" and felt less suffering and self-loathing/judgment in the following months. I more or less thought I had "learned how to stop being so harsh on myself."

Now I see that I never reduced the harshness. What I did was convince my fear/judgement/loathing to use a new rubric for grading me. I did a huge systems overhaul, successfully started a shit ton of habits, and built a much better ability to focus. It was as if to say "See? Look at this awesome plan I have! Yes, I implicitly buy into the universe where it's imperative I do [all the shit]. All I ask is that you give me time. This plan is great and I'll totally be able to do [all the stuff], just not right now."

I was fused with the judgement enough that I wasn't able to question it, only negotiate with it for better terms. The penalty for failure was still "feel like a miserable piece of shit".

I now have a much better sense of what led to this fear and judgement being built up in the first place, and that understanding has led to not doing [all the stuff] feeling more like "a less cool world than others" and not "hell, complete with eternal torment and self-loathing".

comment by Hazard · 2019-07-21T15:45:26.482Z · LW(p) · GW(p)

This [LW(p) · GW(p)] comment

comment by Hazard · 2019-07-21T14:40:24.376Z · LW(p) · GW(p)

Something I noticed about what I take certain internal events to mean:

Over the past 4 years I've had trouble being in touch with "what I want". I've made a lot of progress in the past year (a huge part was noticing that I'd previously intentionally cut off communication with the parts of me that want).

Previously when I'd ask "what do I want right now?" I was basically asking, "What would be the most edifying to my self-concept that is also doable right now?"

So I've managed to stop doing that a lot. Last week, I noticed that "what do I want to do right now?" or "do I want to do X right now?" turns into "am I immediately able to think of interesting parts of X? Are parts of X already loaded into my mind and my brain is working on it?"

Noticing this is super helpful. Basically I was asking "am I already working on X in my head?" and then deciding to work on it explicitly. Consequences of this: If what I was working on in the morning wasn't met with hard road blocks, I'd feel that I'd want to just do that thing for the whole day, and that switching would be "betraying my wants". If I did hit a road block, or my mind was just DONE with the first task of the day, then I could switch.

On the opposite side, if I thought of an activity, and it didn't immediately boot up the relevant and interesting parts, then I'd take that as "I don't want to do this" or "Oh, I guess that feels boring right now."

Now I can work on better predicting "If I did start doing this, how much would I like it?" and I don't have to implicitly rely only on "Am I already working on it?"

comment by Hazard · 2019-07-14T13:40:27.653Z · LW(p) · GW(p)

Being undivided is cool. People who seem to act as one monolithic agent [? · GW] are inspiring. They get stuff done.

What can you do to try and be undivided if you don't know any of the mental and emotional moves that go into this sort of integration? You can tell everyone you know, "I'm this sort of person!" and try super super hard to never let that identity falter, and feel like a shitty miserable failure whenever it does.

How funny that I can feel like I shouldn't be having the "problem" of "feeling like I shouldn't be having XYZ problems". Ha.


Replies from: Kaj_Sotala, Hazard
comment by Kaj_Sotala · 2019-09-30T10:11:01.265Z · LW(p) · GW(p)
You can tell everyone you know, "I'm this sort of person!" and try super super hard to never let that identity falter, and feel like a shitty miserable failure whenever it does.

You could also just avoid the feelings of miserable failure by reclassifying all of your failures as not-failures and then forgetting about them. :-)

comment by Hazard · 2019-07-14T14:01:18.984Z · LW(p) · GW(p)

More Malcolm Ocean:

"So the aim isn’t to be productive all the time. It’s to be productive at the times when your internal society of mind generally agrees it would be good to be productive. It’s not to be able to motivate yourself to do anything. It’s to be able to motivate yourself to do anything it makes sense to do."

I notice some of my older implicit and explicit strategies were, "Well first I'll get good at being able to do any arbitrary thing that I (i.e. the dominant self-concept/identity I want to project) pick, and then I'll work on figuring out what I actually want and care about."

Oops.

Also, noting that the "then I'll figure out what I want" was more "Well I've got no idea how to figure out what I want, so let's do anything else!"

Oops.

comment by Hazard · 2019-04-01T01:35:19.021Z · LW(p) · GW(p)

Reasons why I currently track or have tracked various metrics in my life:

1. A mindfulness tool. Taking the time to record and note some metric is itself the goal.

2. Have data to be able to test a hypothesis about ways some intervention would affect my life. (i.e. Did waking up earlier give me less energy in the day?)

3. Have data that enables me to make better predictions about the future (mostly related to time tracking, "how long does X amount of work take?")

4. Understanding how [THE PAST] was different from [THE PRESENT] to help defeat the Deadly Demons of Doubt and Shitty Serpents of Should (ala Deliberate Once).

I have not always had these in mind when deciding to track a metric. Often I tracked because "that's wut productive people do right?". When I keep these in mind, tracking gets more useful.

comment by Hazard · 2019-03-12T17:57:35.669Z · LW(p) · GW(p)

Current beliefs about how human value works: various thoughts and actions can produce a "reward" signal in the brain. I also have lots of predictive circuits that fire when they anticipate a "reward" signal is coming as a result of what just happened. The predictive circuits have been trained to use the patterns of my environment to predict when the "reward" signal is coming.

Getting an "actual reward" and a predictive circuit firing will both be experienced as something "good". Because of this, predictive circuits can not only track "actual reward" but also the activation of other predictive circuits. (So far this is basically "there's terminal and instrumental values, and they are experienced as roughly the same thing")

The predictive circuits are all doing some "learning process" to keep their firing correlated to what they're tracking. However, the "quality" of this learning can vary drastically. Some circuits are more "hardwired" than others, and less able to update when they begin to become uncorrelated from what they are tracking. Some are caught in interesting feedback loops with other circuits, such that you have to update multiple circuits simultaneously, or in a particular order.

Though everything that feels "good" feels good because at some point or another it was tracking the base "reward" signal, it won't always be a good idea to think of the "reward" signal as the thing you value.

Say you have a circuit that tracks a proxy of your base "reward". If something happens in your brain such that this circuit ceases to update, you basically value this proxy terminally.

Said another way, I don't have a nice clean ontological line between terminal values and instrumental values. The less malleable a predictive circuit, the more "terminal" the value it represents.
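Here's a minimal sketch of that picture in Python (the update rule and numbers are made up for illustration): a circuit that can still update stops firing for a proxy that no longer tracks reward, while a "hardwired" circuit keeps treating the proxy as valuable, i.e. behaves like a terminal value.

```python
import random

class PredictiveCircuit:
    """A predictor that fires in proportion to how much reward it expects."""

    def __init__(self, learning_rate):
        self.learning_rate = learning_rate  # 0.0 = fully "hardwired"
        self.weight = 1.0  # how strongly the proxy currently predicts reward

    def feels_good(self, proxy_signal):
        # The circuit firing is experienced as "good" in proportion to its weight.
        return self.weight * proxy_signal

    def update(self, proxy_signal, actual_reward):
        # Delta-rule update toward whatever correlation actually holds.
        prediction = self.weight * proxy_signal
        self.weight += self.learning_rate * (actual_reward - prediction) * proxy_signal

malleable = PredictiveCircuit(learning_rate=0.1)
hardwired = PredictiveCircuit(learning_rate=0.0)

# Suppose the proxy has become uncorrelated with base reward (reward is now always 0):
for _ in range(1000):
    proxy = random.choice([0.0, 1.0])
    malleable.update(proxy, actual_reward=0.0)
    hardwired.update(proxy, actual_reward=0.0)

print(malleable.weight)  # ~0: the proxy stops feeling good
print(hardwired.weight)  # still 1.0: the proxy is now valued "terminally"
```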

Replies from: Hazard
comment by Hazard · 2019-03-12T18:12:00.491Z · LW(p) · GW(p)

Weirdness that comes from reflection:

In this frame, I can self-reflect on a given circuit and ask, "Does this circuit actually push me towards what I think is good?" When doing this, I'll be using some more meta/higher-order circuits (concepts I've built up over time about what a "good" brain looks like) but I'll also be using lower level circuits, and I might even end up using the evaluated circuit itself in this evaluation process.

Sometimes this reflection process will go smoothly. Sometimes it won't. But one takeaway/claim is that you have this complex roundabout process for re-evaluating your values when some circuits begin to think that other circuits have diverged from "good".

Because of this ability to reflect and change, it seems correct to say that "I value things conditional on my environment" (where environment has a lot of flex, it could be as small as your work space, or as broad as "any existing human culture").

Example. Let's say there was literally no scarcity for survival goods (food, water, etc). It seems like a HUGE chunk of my values and morals are built-up inferences and solutions to resource allocation problems. If resource scarcity was magically no longer a problem, many of my values would have lost their connection to reality. From what I've seen so far of my own self-reflection process, it seems likely that over time I would come to reorganize my values in such a post-scarcity world. I've also currently got no clue what that reorganization would look like.

Replies from: Hazard
comment by Hazard · 2019-03-12T18:18:34.198Z · LW(p) · GW(p)

FAI worry: A human-in-the-loop AI that only takes actions that get human approval (and whose expected outcomes have human approval) hits big problems when the context the AI is acting in is very different from the context in which our values were trained.

Is there any way around this besides simulating people having their values re-organized given the new environment? Is this what CEV is about?

comment by Hazard · 2018-12-23T14:41:20.595Z · LW(p) · GW(p)

The slogan version of some thoughts I've been having lately are in the vein of "Hurry is the root of all evil". Thinking in terms of code. I've been working in a new dev environment recently and have felt the siren song of, "Copy the code in the tutorial. Just import all the packages they tell you to. Don't sweat the details man, just go with it. Just get it running." All that as opposed to "Learn what the different abstractions are grounded in, figure out what tools do what, figure out exactly what I need, and use whatever is necessary to accomplish it."

When I ping myself about why the former has such a tug, I come up with 1) a tiny fear of not being capable of understanding the fine details, and 2) a tiny fear that if understanding is possible, it will take a lot of time and WE'RE RUNNING OUT OF TIME!

Which is interesting, because this is just a side project that I'm doing for fun over winter break, which is specifically designed to get me to learn more.

comment by Hazard · 2018-10-25T18:47:37.442Z · LW(p) · GW(p)

The fact that utility and probability can be transformed while maintaining the same decisions matches what the algo feels like from the inside. When thinking about actions, I often just feel like a potential action is "bad", and it takes effort to piece out if I don't think the outcome is super valuable, or if there's a good outcome that I don't think is likely.
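A tiny sketch of the symmetry in Python (numbers made up): scaling an outcome's probability down while scaling its utility up by the same factor leaves every expected-value comparison unchanged, which is why a "bad" feeling underdetermines whether the outcome seems unlikely or just not that valuable.

```python
def expected_value(lottery):
    # lottery: list of (probability, utility) pairs; remaining probability
    # mass is assumed to land on a zero-utility outcome.
    return sum(p * u for p, u in lottery)

option_a = [(0.9, 10)]   # likely, modest payoff
option_b = [(0.1, 50)]   # unlikely, big payoff

# Transform option_b: halve the probability, double the utility.
option_b_rescaled = [(p / 2, u * 2) for p, u in option_b]

print(expected_value(option_a))           # 9.0
print(expected_value(option_b))           # 5.0
print(expected_value(option_b_rescaled))  # 5.0 -- the decision is unchanged
```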

Replies from: Hazard
comment by Hazard · 2018-11-01T00:07:26.021Z · LW(p) · GW(p)

Thinking about belief in belief [LW · GW].

You can have things called "beliefs" which are of type action. "Having" this belief is actually your decision to take certain actions in certain scenarios. You can also have things called "beliefs" which are of type probability, and are part of your deep felt sense of what is and isn't likely/true.

A belief-action that has a high EV (and feels "good") will probably feel the same as a belief-probability that is close to 1.

Take a given sentence/proposition. You can put a high EV on the belief-action version of that sentence (mayhaps it has important consequences for your social groups) while putting a low probability on the belief-probability version of the sentence.

Meta Thoughts: The above idea is not fundamentally different from belief in belief [LW · GW] or crony beliefs, both of which I've read a year or more ago. What I just wrote felt like a genuine insight. What do I think I understand now that I don't think I understood then?

I think that recently (past two months, since CFAR) I've had better luck with going into "Super-truth" mode, looking into my own soul and asking, "Do you actually believe this?"

Now, I've got many more data points of, "Here's a thing that I totally thought that I believed(probability) but actually I believed(action)."

Maybe the insight is that it's easy to get mixed up between belief-prob and belief-action because the felt sense of probability and EV are very very similar, and genuinely non-trivial to peel apart.

^yeah, that feels like it. I think previously I thought, "Oh cool, now that I know that belief-action and belief-prob are different things, I just won't do belief-action". Now, I believe that you need to teach yourself to feel the difference between them, otherwise you will continue to mistake belief-actions for belief-probs.

Meta-Meta-Thought: The meta-thoughts were super useful to do, and I think I'll do them more often, given that I often have a sense of, "Hmmmm, isn't this basically [insert post in The Sequences here] re-phrased?"

comment by Hazard · 2018-10-03T19:31:43.444Z · LW(p) · GW(p)

Don't ask people for their motives if you are only asking so that you can shit on their motives. Normally when I see someone asking someone else, "Why did you do that?" I interpret the statement to come from a place of, "I'm already about to start making negative judgments about you, this is the last chance for you to offer a plausible excuse for your behavior before I start firing."

If this is in fact the dynamic, then no one is incentivised to give you their actual reasons for things.

Replies from: Elo
comment by Elo · 2018-10-03T20:14:49.034Z · LW(p) · GW(p)

I have been looking at intentions and trying to act with intentions in mind.

No one ever has ill intentions; they can have a "make the sale at your detriment" intention. But no one ever has a "worse off for everyone" intention.

Replies from: Hazard
comment by Hazard · 2018-10-04T01:10:23.624Z · LW(p) · GW(p)
make the sale at your detriment

I like that phrasing.

Yeah, I was speaking and (slightly) thinking about people with the pure motive to harm, which wouldn't be a typical case of this. Rephrase with, "Don't blah blah blah if you will end up making explicit negative judgments at them," and you have a better version of my thought.

comment by Hazard · 2018-08-02T01:10:59.035Z · LW(p) · GW(p)

I'm looking at notebook from 3 years ago, and reading some scribbles from past me excitedly describing how they think they've pieced together that anger and the desire to punish are adaptations produced by evolution because they had good game theoretic properties. In the haste of the writing, and in the number of exclamation marks used, I can see that this was a huge realization for me. It's surprising how absolutely normal and "obvious" the idea is to me now. I can only remember a glimmer of the "holy shit!"ness that I felt at the time. It's so easy to forget that I haven't always thought the way I currently do. As if I'm typical-minding my past self.

comment by Hazard · 2018-05-31T18:31:08.272Z · LW(p) · GW(p)

An uncountable finite set is any finite set that contains the source code to a super intelligence that can provably prevent anyone from counting all of its elements.

Replies from: Hazard
comment by Hazard · 2019-12-02T00:30:36.522Z · LW(p) · GW(p)

I still think this is genius.

comment by Hazard · 2018-04-15T14:53:20.483Z · LW(p) · GW(p)

In a fight between the CMU student body and the rationalist community, CMU would probably forget about the fight unless it was assigned for homework, and the rationalists would all individually come to the conclusion that it is most rational to retreat. No one would engage in combat, and everyone would win.

comment by Hazard · 2018-04-15T14:43:23.431Z · LW(p) · GW(p)

I notice a disparity between my ability to parse difficult texts when I'm just "reading for fun" versus when I'm trying to solve a particular problem for a homework assignment. It's often easier to do it for homework assignments. When I've got time that's just, "reading up on fun and interesting things," I bounce-off of difficult texts more often than I would like.

After examining some recent instances of this happening, I've realized that when I'm reading for fun, my implicit goal has often been, "read whatever will most quickly lead to a feeling of insight." When I'm reading for homework, I have a very explicit goal of, "understand how dynamic memory management works," or whatever the topic is. Upon reflection, I think that most of the time I'd be better served if I approached my fun-exploratory reading with a goal of, "Find something that seems interesting, and then focus on trying to understand that in particular."

The useful TAP would be to notice when I'm bouncing off a text, check whether my actual reasons for reading are aligned with my big-picture reasons for reading, and readjust as necessary.

Replies from: Hazard
comment by Hazard · 2020-10-05T21:44:50.000Z · LW(p) · GW(p)

This flared up again recently. Besides "wanting insight" often I simply am searching for fluency. I want something that I can fluently engage with, and if there's an impediment to fluency, I bounce off. Wanting an experience of fluency is a very different goal from wanting to understand the thing. Rn I don't have too many domains where I have technical fluency. I'm betting if I had more of that, it would extend my patience/ability to slog through texts that are hard for me.

comment by Hazard · 2018-03-14T23:19:09.421Z · LW(p) · GW(p)

I've been working on some more emotional bugs lately, and I'm noticing that many of the core issues that I'm dragging up are ones I've noticed at various points in the past and then just... ? I somehow just managed to forget about them, though I remember that in round 1 it also took a good deal of introspection for these issues to rise to the top. Keeping a permanent list of core emotional bugs would be an easy fix. The list would need to be somewhere I look at least once a week. I don't always have to be working on all of them, but I at least need to not forget that these problems exist.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-15T00:47:22.971Z · LW(p) · GW(p)

Probably not an accident. Forgetfulness is one of the main tools your mind will use to get you to stop thinking about things. If you make a list you might end up flinching away from looking at the list.

Replies from: Hazard
comment by Hazard · 2018-03-15T16:29:35.513Z · LW(p) · GW(p)

Is that a prediction about how one's default "forget painful stuff" mechanisms work, or have you previously made a list and also ended up ignoring it? You've written elsewhere about conquering a lot of emotional bugs in the past year, and I'd be interested to know what you did to keep those bugs in mind and not forget about them.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-15T18:42:41.849Z · LW(p) · GW(p)

I have forgotten about important emotional bugs before, and have seen other people literally forget the topic of the conversation when it turns to a sufficiently thorny emotional bug.

The thing that usually happens to my lists is that they feel wrong and I have to regenerate them from scratch constantly; they're like Focusing labels that expire and aren't quite right anymore.

The past year I was dealing with what felt to me like approximately one very large bug (roughly an anxious-preoccupied attachment thing), so it was easy to remember.

comment by Hazard · 2018-03-09T16:13:32.955Z · LW(p) · GW(p)

"With a sufficiently negligent God, you should be able to hack the universe."

Just a fun little thought I had a while ago. The idea being that if your deity intervenes in the world, or if there are prayers, miracles, "supernatural creatures" or anything of that sort, then with enough planning and chutzpah, you should be able to hack reality unless God has got a really close eye on you.

This partially came from a fiction premise I have yet to act on. Dave (garden variety atheist) wakes up in hell. Turns out that the Christian God TM is real, though a bit of a dunce. Dave and Satan team up and go on a wacky adventure to overthrow God.

comment by Hazard · 2018-02-25T15:36:51.538Z · LW(p) · GW(p)

Quick thoughts on TAPS:

The past few weeks I've been doing a lot of posture/physical tick based TAPs (not slouching, not biting lips etc). These seem to be very well fit to TAPs, because the trigger is a physical movement, making it easier to notice. I've noticed roughly three phases of noticing triggers

  1. I suddenly become aware of the fact I've been doing the action.
  2. I become aware of the fact that I've initiated the action.
  3. Before any physical movement happens, I notice the "impulse" to do the thing .

To make a TAP run deep, it seems the key is to train up the ladder and be able to deal with triggers and actions on the level where they first originate in the mind.

comment by Hazard · 2018-02-15T16:49:13.968Z · LW(p) · GW(p)

Here's a pattern I want to outline and possible suggestions on how to fix it.

Sometimes when I'm trying to find the source of a bug, I make incorrect updates. An explanation of what the problem might be pops to mind, and it seems to fit (ex. "oops, this machine is Big Endian, not Little Endian"). Then I work on the bug some more, things still don't work, and at some point I find the real problem. Today, when I found a bug I was hunting for, I had a moment of, "Oh shit, an hour ago I updated my beliefs about how this machine worked, but that was a bad update because the problem had nothing to do with Endianness".

I imagine that there have been plenty of times when I've been programming/lifing and I haven't kept track of updates that were tied to a problem, and never corrected them when I found out what the actual solution was.

Mayhaps I should keep a running list of assumptions and updates I've made while I program, and every time a bug is completely resolved, see how that affects past updates.

Replies from: Hazard
comment by Hazard · 2018-12-22T22:24:25.547Z · LW(p) · GW(p)

Had a similar style bug while programming today. I caught it much faster, though I can't say if that can be attributed to previously identifying this pattern. But I did think of the previous bug as soon as I made the mental leap to figure out what was wrong this time.

comment by Hazard · 2021-03-07T23:42:06.559Z · LW(p) · GW(p)

Previously when I'd encountered the distinction between synthetic and analytic thought (as philosophers used them), I didn't quite get it. Yesterday I started reading Kant's Prolegomena and have a new appreciation for the idea. I used to imagine that "doing the analytic method" meant looking at definitions. 

I didn't imagine the idea actually being applied to concepts in one's head. I imagined the process being applied to a word. And it seemed clear to me that you're never going to gain much insight or wisdom from investigating a word's definition and going to a dictionary.

But the process of looking at some existing concept you have in your mind, that you already use and think with, and peeling it apart to see what you're actually doing, that's totally useful!

comment by Hazard · 2019-12-01T23:27:12.067Z · LW(p) · GW(p)

This comment will collect things that I think beginner rationalists, "naive" rationalists, or "old school" rationalists (these distinctions are in my head, I don't expect them to translate) do which don't help them.

Replies from: Hazard
comment by Hazard · 2019-12-01T23:41:59.626Z · LW(p) · GW(p)

You have an exciting idea about how people could do things differently. Or maybe you think of norms which if they became mainstream would drastically increase epistemic sanity. "If people weren't so sensitive and attached to their identities then they could receive feedback and handle disagreements, allowing us to more rapidly work towards the truth." (example picked because versions of this stance have been discussed on LW)

Sometimes the rationalist is thinking "I've got no idea how becoming more or less sensitive, gaining a thicker or thinner skin, or shedding or gaining identity works in humans. So I'm just going to black box this, tell people they should change, negatively reinforce them when they don't, and hope for the best." (ps I don't think everyone thinks this, though I know at least one person who does) (most relevant parts in italics)

Comments will be continued thoughts on this behavior.


Replies from: Hazard, Hazard, Hazard, Hazard, Hazard
comment by Hazard · 2019-12-02T00:25:43.544Z · LW(p) · GW(p)

When I see this behavior, I worry that the rationalist is setting themselves up to have a blindspot when it comes to themselves being "overly sensitive" to feedback. I worry about this because it's happened to me. Not with reactions to feedback but with other things [LW(p) · GW(p)]. It's partially the failure mode of thinking that some state is beneath you, being upset and annoyed at others for being in that state, and this disdain making it hard to see when you engage in it.

K, I get that thinking a mistake is trivial doesn't automatically mean you're doomed to secretly make it forever. Still, I worry.

comment by Hazard · 2019-12-01T23:59:56.010Z · LW(p) · GW(p)

The way this can feel to the person being told to change: "None of us care about how hard this is for you, nor the pain you might be feeling right now. Just change already, yeesh." (it can be true or false that the rationalist actually thinks this. I think I've seen some people playing the rationalist role in this story who explicitly endorsed communicating this sentiment)

Now, I understand that making someone feel emotionally supported takes various levels of effort. Sometimes it might seem like the effort required is not worth the loss in pursuing the original rationality target. We could have lots of fruitful discussion about what norms would be good for drawing that line. But I think another problematic thing that can happen is that in the rationalist's rush to get back on track to pursuing the important target, they intentionally or unintentionally communicate: "You aren't really in pain. Or if you are, you shouldn't be in pain / you suck or are weak for feeling pain right now." Being told you aren't in pain SUCCCKS, especially when you're in pain. Being reprimanded for being in pain SUCCCKS, especially when you're in pain.

Claim: Even if you've reached a point where it would be too costly to give the other person adequate emotional support, the least you can do is not make them think they're being gaslit about their pain or reprimanded for it.

Replies from: Pattern
comment by Pattern · 2019-12-02T00:49:33.071Z · LW(p) · GW(p)

Errata.

they intentionally

or [un]intentionally communicate:

[a] "You aren't really in pain. [b] Or if you are, you shouldn't be in pain / you suck or are weak for feeling pain right now." [a] Being told you aren't in pain SUCCCKS, especially when you're in pain.
Claim: Even if you've reached a point it would be to costly to give the other person adequate emotional support, the least you can do is not make them think they're being [a'] gaslit about their pain.

The dialogue refers to two possibilities, A and B, but only A is referenced afterwards. (I wonder what the word for 'telling people their pain doesn't matter' is.)

Replies from: Hazard
comment by Hazard · 2019-12-02T01:37:59.278Z · LW(p) · GW(p)

Yeah, I only talked about A after. Is the parenthetical rhetorical? If not I'm missing the thing you want to say.

Replies from: Pattern
comment by Pattern · 2019-12-02T17:14:43.145Z · LW(p) · GW(p)

Non-rhetorical. The spelling suggestion suggests an improvement which is largely unambiguous/style-agnostic. Suggesting adding a word requires choosing a word - a matter which is ambiguous/style-dependent. Sometimes writing contains grammatical errors - but when people other than the author suggest fixes, the fixes don't have the same voice. This is why I included a prompt for what word you (Hazard) would use.

For clarity, I can make less vague comments in the future. What I wanted to say rephrased:

they intentionally or [un]intentionally communicate:
"You aren't really in pain. Or if you are, you shouldn't be in pain / you suck or are weak for feeling pain right now." Being told you aren't in pain SUCCCKS, especially when you're in pain.
Claim: Even if you've reached a point it would be to costly to give the other person adequate emotional support, the least you can do is not make them think they're being gaslit about[/mocked for] their pain.

Here the [] serve one purpose - suggesting an improvement, even when there are multiple choices.

Replies from: Hazard
comment by Hazard · 2019-12-02T19:08:56.167Z · LW(p) · GW(p)

Aaaah, I see now. Just edited to what I think fits.

comment by Hazard · 2019-12-01T23:49:38.150Z · LW(p) · GW(p)

If you really had no idea... fine, you can't do much better than trying to operant-condition a person towards the end goal. In my world, getting a deep understanding of how to change is the biggest goal/point of rationality (I've given myself away, I care about AI Alignment less than you do ;).

So trying to skip to the rousing debate and clash of ideas while just hoping everyone figures out how to handle it feels like leaving most of the work undone.

Replies from: Pattern
comment by Pattern · 2019-12-02T01:06:43.440Z · LW(p) · GW(p)

Meta note: Me upvoting the comment above could make things go out of order.

operant conditioning

It could also be seen as selection - get rid of the people who aren't X. This risks getting rid of people who might learn, which could be an issue if the goal of that place (whether it's LW, SSC, or etc.) includes learning.

An organization consisting only of people who have a PhD might be an interesting place, perhaps enabling collaboration and cutting-edge work that couldn't be done anywhere else. But without a place where people can get a PhD, eventually there will be no such organizations.

Replies from: Hazard
comment by Hazard · 2019-12-02T01:42:20.487Z · LW(p) · GW(p)

(Meta: the order wasn't important, thanks for thinking about that though)

The selection part is something else I was thinking about. One of my thoughts was your "If there's no way to train PhDs, they die out." And the other was me being a bit skeptical of how big the pool would be right this second if we adopted a really thick-skin policy. Reflecting on that second point, I realize I'm drawing from my day-to-day distribution, and don't have thoughts about how thick-skinned most LW people are or aren't.

comment by Hazard · 2019-12-02T19:03:44.530Z · LW(p) · GW(p)

Thought that is related to this general pattern, but not this example. Think of having an idea of an end skill that you're excited by (doing bayes updates irl, successfully implementing TAPs, being swayed by "solid logical arguments"). Also imagine not having a theory of change. I personally have sometimes not noticed that there is or could be an actual theory of how to move from A to B (often because I thought I should already be able to do that), and so would use the black box negative reinforcement strategy on myself.

Being in that place involved being stuck for a while and feeling bad about being stuck. Progress was only made when I managed to go "Oh. There are steps to get from A to B. I can't expect to already know them. I must focus on understanding this progression, and not on just punishing myself whenever I fail."

comment by Hazard · 2019-12-02T00:16:25.971Z · LW(p) · GW(p)

I've been thinking about this as a general pattern, and have specifically filled in "you should be thick skinned" to make it concrete. Here's a thought that applies to this concrete example that doesn't necessarily apply to the general pattern.

There's all sorts of reasons why someone might feel hurt, put-off, or upset about how someone gives them feedback or disagrees with them. One of these ways can be something like, "From past experience I've learned [LW · GW] that someone who uses XYZ language or ABC tone of voice is saying what they said to try and be mean to me, and they will probably try to hurt and bully me in the future."

If you are the rationalist in this situation, you're annoyed that someone thinks you're a bully. You aren't a bully! And it sure would suck if they convinced other people that you were a bully. So you tell them that, duh, you aren't trying to be mean, that this is just how you talk, and that they should trust you.

If you're the person being told to change, you start to get even more worried (after all, this is exactly what your piece-of-shit older brother would do to you): this person is telling you to trust that they aren't a bully when you have no reason to, and you're worried they're going to turn the bystanders against you.

Hmmmm, after writing this out the problem seems much harder to deal with than I first thought.


comment by Hazard · 2019-11-12T02:24:16.675Z · LW(p) · GW(p)

Have some horrible jargon: I spit out a question or topic and ask you for your NeMRIT, your Next Most Relevant Interesting Take.

Either give your thoughts about the idea I presented as you understand it, or, if that's boring, give the thoughts that interest you that seem conceptually closest to the idea I brought up.


Replies from: Pattern
comment by Pattern · 2019-12-02T01:12:35.751Z · LW(p) · GW(p)

MIST*, Most Interesting Similar Take?

*This is a backronym.

Replies from: Hazard
comment by Hazard · 2019-12-02T01:53:11.540Z · LW(p) · GW(p)

I like that because I can verb it while speaking.

"How much cattle could you fit in this lobby? You can answer directly or mist."

comment by Hazard · 2019-11-11T20:02:42.789Z · LW(p) · GW(p)

Kevin Zollman at CMU looks like he's done a decent amount of research on group epistemology. I plan to read the deets at some point, here's a link if anyone wanted to do it first and post something about it.

comment by Hazard · 2019-08-21T15:48:45.234Z · LW(p) · GW(p)

I often don't feel like I'm "doing that much", but find that when I list out all of the projects, activities, and thought streams going on, there's an amount that feels like "a lot". This has happened when reflecting on every semester in the past 2 years.

Hyp: Until I write down a list of everything I'm doing, I'm just probing my working memory for "how much stuff am I up to?" Working mem has a limit, and reliably I'm going to get only a handful of things. Anytime I'm doing more things than fit in working memory, when I stop to write them all down, I will experience "Huh, that's more than it feels like."

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-08-21T22:28:31.958Z · LW(p) · GW(p)

Relatedly, the KonMari cleaning method involves taking all items of a category (e.g. all books) and putting them in one big pile, before clearing them out. You often feel like you don't own "that much stuff" and are almost always surprised by the size of the pile.

comment by Hazard · 2019-05-05T13:26:04.280Z · LW(p) · GW(p)

Short framing on one reason it's often hard to resolve disagreements:

[with some frequency] disagreements don't come from the same place where they are found. Your brain is always running inference on "what other people think". From a statement like, "I really don't think it's a good idea to homeschool", your mind might already be guessing at a disagreement you have 3 concepts away, yet only ping you with a "disagreement" alarm.

Combine that with a decent ability to confabulate. You ask yourself "Why do I disagree about homeschooling?" and you are given a plethora of possible reasons to disagree and start talking about those.

comment by Hazard · 2019-01-13T01:04:11.419Z · LW(p) · GW(p)

True if you squint at it right: Learning more about "how things work" is a journey that starts at "Life is a simple and easy game with random outcomes" and ends in "Life is a complex and thought intensive game with deterministic outcomes"

comment by Hazard · 2018-11-01T15:40:03.282Z · LW(p) · GW(p)

Idea that I'm going to use in these short form posts: for ideas/things/threads that I don't feel are "resolved" I'm going to write "*tk*" by the most relevant sentence for easy search later (I vaguely remember Tim Ferriss talking about using "tk" as a substitute for "do research and put the real numbers in", since "tk" is not a letter pair that shows up much in English words).

comment by Hazard · 2018-10-14T13:36:44.714Z · LW(p) · GW(p)

I've taken a lot of programming courses at university, and now I'm taking some more math and proof based courses. I notice that it feels considerably worse to not fully understand what's going on in Real Analysis than it did to not fully understand what was going on in Data Structures and Algorithms.

When I'm coding and pulling on levers I don't understand (outsourcing tasks to a library, or adding this line to the project because, "You just have to so it works") there's a yuck feeling, but there's also, "Well at least it's working now."

Compare that to math. If I'm writing a proof on an exam or a homework, and I don't really know what I'm writing (but you know, I vaguely remember this being what a proof for this sort of problem looks like), it feels like a disgusting waste of time.

comment by Hazard · 2018-08-05T13:14:57.669Z · LW(p) · GW(p)

The other day at lunchtime I realized I'd forgotten to make and pack a lunch. It felt odd that I only realized it right when I was about to eat and was looking through my bag for food. Tracing back, I remembered that something abnormal had happened in my morning routine, and after dealing with the pop-up, I just skipped a step in my routine and never even noticed.

One thing I've done semi-intentionally over the past few years is decrease the amount of ambient thought that goes to logistics. I used to consider it to be "useless worrying", but given how a small disruption was able to make me skip a very important step, now I think of it more as trading off efficiency for "robustness".

comment by Hazard · 2018-07-01T14:52:07.993Z · LW(p) · GW(p)

Here is a an abstraction of a type of disagreement:

Claim: it is common for one to be more concerned with questions like, "How should I respond to XYZ system?" than with "How should I create an accurate model of XYZ system?"

Let's say the system / environment is social interactions.

Liti: Why are you supposed to give someone a strong handshake when you meet them?

Hale: You need to give a strong handshake

Here Hale misunderstands Liti as asking for information about the proper procedure to perform. Really, Liti wants to know how this system came to be, why we shake hands in the first place, and why people use it as a proxy for getting the gist of you.

For Hale, it can be frustrating when Liti keeps asking questions, because they've explained everything that seems important and necessary to function in a handshake-scenario.

For Liti this can be frustrating because Hale isn't answering their question, and they feel like they aren't being heard.

comment by Hazard · 2018-02-12T01:45:04.262Z · LW(p) · GW(p)

Last fall I hosted a discussion group with friends on three different occasions. I pitched it as "get interesting people together and intentionally have an interesting conversation", and it was not a rationalist discussion group. One thing that I noticed was that whenever I wanted to really fixate on and solve a problem we identified, it felt wrong, like it would break some implicit rule I never remembered setting.

Later I pinpointed the following as the culprit. I personally can't consistently produce quality clear thinking at "conversational speeds" on things I haven't thought about before (I'd be interested in knowing what the distribution on this ability is). In this case, buckling down and solving the problem would mean having a long pause in the conversation while I and others think.

It also happens that such a pause is generally very uncomfortable for a casual group unless you have very particular norms/rules sanctioning it.

Actionable thought: if you want people to actually try to solve a problem in a group setting, you probably want to make it super okay/normal/acceptable to have long pauses where you turn off your "conversation mind" and go into "serious thought" mode.

Replies from: Raemon
comment by Raemon · 2018-02-12T07:34:32.530Z · LW(p) · GW(p)

Dunno how easy this is to implement in random non-rationalist group settings, but

a) if you're the one who brought the group together, you can set rules. (See the Archipelago model of community standards)

b) In NYC (in an admittedly rationalist setting), I had success implementing the 12-second rule of think-before-speaking

comment by Hazard · 2018-02-09T03:24:32.146Z · LW(p) · GW(p)

Highly speculative thought.

I don't often get angry/upset/exasperated with the coding or math that I do, but today I've gotten royally pissed at some Java project of mine. Here's a guess at a possible mechanism.

The more human-like a system feels, the easier it is to anthropomorphize and get angry at. When dealing with my code today, it has felt less like the world of being able to reason carefully over a deterministic system, and more like dealing with an unpredictable, possibly hostile agent. Mayhaps part of my brain pattern-matches that behaviour to something intelligent -> something human -> apply anger strategy.

comment by Hazard · 2018-10-05T19:49:30.201Z · LW(p) · GW(p)

Good description of what was happening in my head when I was experiencing the depths of the uncanny valley of rationality:

I was more genre savvy than reality savvy. Even when I first started to learn about biases, I was more genre-of-biases savvy than actual bias-savvy. My first contact with the sequences successfully prevented me from being okay with double-thinking, and mostly removed my ability to feel okay about guiding my life via genre-savvyness. I also hadn't learned enough to make any sort of superior "basis" from which to act and decide. So I hit some slumps.

comment by Hazard · 2018-09-27T00:28:33.258Z · LW(p) · GW(p)

Likely false semi-explicit belief that I've had for a while: changes in patterns of behavior and thought are "merely" a matter of conditioning/training. Whenever it's hard to change behavior, it's just because the system is already in motion in a certain direction, and it takes energy/effort to push it in a new direction.

Now, I'm more aware of some behaviors that seem to have access to some optimization power that has the goal of keeping them around. Some behaviors seem to be part of a deeper strategy run by some sub-process of me, a sub-process that can notice when wrecking-ball Conscious Me is trying to change the behavior, and starts throwing road spikes and slashing my tires. Conscious Me, having previously not had a space for this in its ontology, just went, "Man, sure is hard to change this behavior. Guess I just have to apply more juice or give up."

comment by Hazard · 2018-08-09T13:25:23.833Z · LW(p) · GW(p)

I've always been off-put when someone says, "free will is a delusion/illusion". There seems to be a hinting that one's feelings or experiences are in some way wrong. Here's one way to think you have fundamental free will without being 'deluded' -> "I can imagine a system where agents have an ontologically basic 'decision' option, and it seems like that system would produce experiences that match up with what I experience, therefore I live in a system with fundamental free-will". Here, it's not that you are trapped in an illusion, it's just that you came to a wrong conclusion based on your experience data.

What I think now is -> "My experiences seem consistent with a fundamental free-will universe, and with a deterministic physics universe, and given that the free-will universe doesn't seem super coherent, I'm going to guess I live in the deterministic physics universe." There's probably no sub-circuit in your brain specifically dedicated to fabricating the "experience of free-will".

comment by Hazard · 2018-05-31T14:30:22.195Z · LW(p) · GW(p)

Person I talked to once: "Moral rules are dumb because they aren't going to work in every scenario you're going to encounter. You should just judge everything case by case."

The thing that feels most wrong about this to me is the proposition that there is an action you can do which is, "Judge everything case by case". I don't think there is. You wouldn't say, "No abstraction covers every scenario, so you should model everything in quarks."

For some reason or another, it sometimes feels like you can "model things at their most reduced" when pondering a moral decision. But you aren't even close. "Judge everything case by case" arguments seem to come from a place of not knowing how your mind works. Mayhaps it's more of a justification thing, where if you say, "It felt right to me" you're generally off the hook, whereas if you supply principled reasons for your decision making, you open yourself up to criticism (Copenhagen ethics-ish).

comment by Hazard · 2018-04-08T14:43:39.524Z · LW(p) · GW(p)

I can't remember the exact quote or where it came from, so I'm going to paraphrase.

The end goal of meditation is not to be able to calm your mind while you are sitting cross-legged on the floor, it's to be able to calm your mind in the middle of a hurricane.

Mapping this onto rationality, there are two question you can ask yourself.

How rational can I be while making decisions in my room [LW · GW]?

How rational can I be in the middle of a hurricane?

I think the distinction is important because recognizing it allows you to train both skills separately.

Replies from: Elo
comment by Elo · 2018-04-08T23:06:24.876Z · LW(p) · GW(p)

I suspect there is relevance here to maps of different details.

For example playing a ball sport. I can intellectually know a lot more than I can carry out in my system 1 while running from the other players.

For s1 I need tighter models that I can run on the fly. Not sure if that matches perfectly to meditating in a hurricane.

comment by Hazard · 2018-04-05T23:49:08.097Z · LW(p) · GW(p)

Some thoughts on a toy model of productivity and well-being

T = set of tasks

S = set of physiological states

R = level of "reflective acceptance" of current situation (ex. am I doing "good" or "bad")

Quality of Work = some_function(s,t) + stress_applied

Quality of Subjective Experience = Quality of Work - stress + R


Some states are stickier than others. It's easier to jump out of "I'm distracted" than it is to escape "I've got the flu". States can be better or worse at doing tasks, and tasks can be of varying difficulty.

There is some lever, which I'm going to call stress (might also call it willpower), that you can spam to get a non-trivial increase in work output, though it seems to max out pretty fast.

R is very much primal, and also seems to be distinct from S. I generally don't feel bad about not being able to do work when I'm sick (normal R, low S), yet if I'm persistently "just distracted" it's easier to get a bad R value. By default, it seems like R is the main feedback loop us humans use to make corrective measures.

Sometimes I feel amazing and can just breeze through work, other times I can barely think. I'm used to trying to maintain a constant quality of work, which means if I'm in a poor S, more stress is applied, which decreases the quality of the subjective experience, which can have long-term negative effects.

The master-level play seems to be to hack your S to consistently be higher quality. Growth-mindset? Diet? Stimulants?
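
A minimal sketch of this model in code, purely for illustration: the original leaves some_function unspecified, so the function shapes, field names, and numbers below are all my own guesses.

```python
from dataclasses import dataclass

@dataclass
class State:           # an element of S
    energy: float      # 0.0 = flu-level, 1.0 = feeling amazing

@dataclass
class Task:            # an element of T
    difficulty: float  # 0.0 = trivial, 1.0 = very hard

STRESS_CAP = 0.3       # the stress lever maxes out pretty fast

def quality_of_work(s: State, t: Task, stress: float) -> float:
    # some_function(s, t) + stress_applied; the product is a guess at
    # "better states and easier tasks -> better work"
    return s.energy * (1.0 - t.difficulty) + min(stress, STRESS_CAP)

def subjective_experience(s: State, t: Task, stress: float, r: float) -> float:
    # quality of work, minus the stress applied, plus reflective acceptance R
    return quality_of_work(s, t, stress) - stress + r

# Sick but self-accepting (low S, normal R) vs. "just distracted" with a bad R:
flu, distracted = State(energy=0.2), State(energy=0.5)
essay = Task(difficulty=0.6)
print(subjective_experience(flu, essay, stress=0.3, r=0.8))         # ~0.88
print(subjective_experience(distracted, essay, stress=0.3, r=0.1))  # ~0.30
```

The "constant quality of work" failure mode falls out directly: in a poor S the only free lever is stress, and stress gets subtracted straight from the subjective experience.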


Replies from: Hazard
comment by Hazard · 2019-12-02T00:37:53.625Z · LW(p) · GW(p)

Or, if you're okay with being a bit less of a canonical robust agent [LW · GW] and don't want to take on the costs of reliability [LW · GW], you could try to always match your work to your state. I'm thinking more of "mood" than "state" here. Be infinitely creative chaos.

Oooh, I don't know any blog post to cite, but Duncan mentioned at a CFAR workshop the idea of being a King or a Prophet. Both can be reliable and robust agents. The King does so by putting out Royal Decrees about what they will do, and then executing said plans. The Prophet gives you prophecies about what they will do in the future, and they come true. While you can count on both the decrees of the king and the prophecies of the prophet, the actions of the prophet are more unruly and chaotic, and don't seem to make as much sense as the king's.

comment by Hazard · 2018-03-31T20:30:47.184Z · LW(p) · GW(p)

I notice that there’s almost a sort of pressure that builds up when I look at someone, as if it’s a literal indicator of, “Dude, you’re approaching a socially unacceptable staring time!”

It seems obvious what is going on. If you stare at someone for too long, things get “weird” and you come off as a “creep”. I know that. Most people know that. And since we all have common knowledge about that rule, I understand that there are consequences to staring at someone for more than a second or two. Thus, the reason I don’t stare at people for very long is because I know I will be socially penalized for it.

Except I’m doubting the story that such a line of reasoning is ever computed in the actual scenario. I recently realized that when I don’t have my contacts in (I’ve got really terrible vision), I feel no such pressure to look away from people. I can just stare at a stranger who is only a few feet away from me, and I only feel a vague obligation like, “Hmmm, I mean I guess I should stop staring…”

This seems like weak evidence that my behavior "Not staring at people for too long" is a result of a visual-input-to-action mapping, rather than an implicit reasoning process.

Replies from: Hazard
comment by Hazard · 2018-07-19T20:29:04.101Z · LW(p) · GW(p)

Another example of "I was running a less general and more hacky algorithm than anticipated".

On a bike trip through Vietnam, very few people in the countryside spoke English. Often, we'd just talk at each other in our respective languages and gesticulate wildly to actually make our points.

I noticed that I was still smiling and laughing in response to things said to me in Vietnamese, even though I had no idea what was going on. This has led me to see the decision to laugh or smile as mostly based on non-verbal stuff, and not, "Yes, I understand the thing you have said, and what you said is funny."

comment by Hazard · 2018-03-31T12:48:14.322Z · LW(p) · GW(p)

I'm currently reading The Open Veins of Latin America, which is a detailed history of how Latin America has been screwed over across the centuries. It reminds me of a book I read a while ago, Confessions of an Economic Hit-man. Though it's clear the author thinks that what has happened to Latin America has been unjust, he does a good job of not adding lots of "and therefore..."s. It's mostly a poetic historical account. There are a lot more cartoonishly evil things that have happened in history than I realized.

I'm simulating bringing up this book to various friends, and in many cases the sim-of-friend feels the need to either go, "Yeah it sucks, but it's not actually that bad because XYZ," or "I know! The globalist/capitalist/materialist west is sooo evil, right?"

This seems to point to a general trend of people not wanting to spend a ton of time dwelling on the data, and instead jumping straight to drawing conclusions.

If you spend enough time dealing with people who are trying to get certain data to support their team, you start to lose your ability to engage with exploring the territory. For some, it might not feel safe to ask about what the U.S. did or didn't do in Latin America, because if they agree to the wrong point, they might be forced into the other side's conclusion.

Hold off on proposing solutions. [LW · GW]

comment by Hazard · 2018-03-24T12:42:33.110Z · LW(p) · GW(p)

Fun Framing: Empiricism is trying to predict TheUniverse(t = n + delta) using TheUniverse(t=n) as your blackbox model.
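
One way to make the framing concrete (toy dynamics invented for illustration): for a deterministic system, the best "model" of the state at t = n + delta is just the system's own update rule applied delta times.

```python
def the_universe_step(x: float) -> float:
    # stand-in dynamics; any deterministic update rule works here
    return 3.7 * x * (1.0 - x)

def predict(state_at_n: float, delta: int) -> float:
    # "TheUniverse(t=n) as your blackbox model": just run it forward
    for _ in range(delta):
        state_at_n = the_universe_step(state_at_n)
    return state_at_n

print(predict(0.42, delta=10))
```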

comment by Hazard · 2018-03-18T14:05:13.675Z · LW(p) · GW(p)

Sometimes the teacher makes a typo. In conversation, sometimes people are "just wrong". So a lot of the time, when you notice confusion, it can be dismissed with "the other person just screwed up". But reality doesn't screw up. It just is. Always pay attention to confusion that comes from looking at reality.

(Also, when you come to the conclusion that another person "screwed up", you aren't completely done until you have some understanding of how they might have screwed up)

comment by Hazard · 2018-03-18T13:29:41.168Z · LW(p) · GW(p)

A rephrasing of ideas from the recent Care Less post.

Value allocation is not zero sum, though time allocation is. In order to not break down at the "colossal injustice of it all", a common strategy is to operate as if value is zero-sum.

To be as effective as possible, you need to be able to see the dark world, one that is beyond the reach of God. Do not explain why the current state of affairs is acceptable. Instead, look at reality very carefully and move towards the goal. Explaining why your world is acceptable shuts down the sense that more is possible.

comment by Hazard · 2018-03-16T22:09:23.080Z · LW(p) · GW(p)

I just finished reading and rereading Debt: The First 5000 Years. I was tempted to go, "Yep, makes sense, I was basically already thinking about money and debt like that." Then I remembered that not but two months ago I was arguing with a friend and asserting that there was nothing dysfunctional about being able to sell your kidney. It's hard to remember what I used to think about certain things. When there's a concrete reminder, sometimes it comes as a shock that I used to think differently from how I do. For whatever big things I've changed my mind about in the past few years, I doubt that the "proper consequences" of those changes have successfully propagated to all corners of my mind. Another thing to watch out for...

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-03-16T23:22:17.473Z · LW(p) · GW(p)

Worth reading the mountains of criticism of this book, e.g. these blog posts. I still got something interesting out of reading it though.

Replies from: Hazard
comment by Hazard · 2018-03-17T00:22:18.367Z · LW(p) · GW(p)

Most of what I've gotten out of the book has been lenses for viewing coordination issues, and less "XYZ events in history happened because of ABC." (and skimming the posts you linked, they seemed more to do with the latter)

I think reading Nassim Taleb's Black Swan was the first time I immediately afterwards googled "book name criticism". Taleb had made some minor claim about network theory not being used for anything practical, which turned out to just be wrong (a critic cited it being used for developing solutions to malaria outbreaks). Seeing that made me realize I hadn't even wondered whether or not the claim was true when I first read it. Since then I've been more skeptical of any given details an author uses, unless it seems like a "basic" element of their realm of expertise (like, I don't doubt any of the anthropological details Graeber presented about the Tiv, though I may disagree with his extrapolations)

comment by Hazard · 2018-02-13T21:51:32.971Z · LW(p) · GW(p)

"It seems like you are arguing/engaging with something I'm not saying."

I can remember an argument with a friend who went to great lengths to defend a point he didn't feel super strongly about, all because he implicitly assumed I was about to go "Given point A, X conclusion, checkmate."

It seems like a pretty common "argumental movement" is to get someone to agree to a few simple propositions, with the goal of later "trapping" them with a dubious "and therefore!". People are good at spotting this, and will often fight you on "facts" because they know the conclusion you are trying to reach (ala The Signal and The Corrective).

It seems like my friend was still running the same defensive mechanism, even when there wasn't intent on my part to trap him in a conclusion.

Often, when someone I'm talking to "argues with something I'm not saying", I don't notice in time, and quickly I also end up arguing a point I don't care about.

comment by Hazard · 2018-02-12T02:08:44.481Z · LW(p) · GW(p)

I really like the phrasing alkjash used, One Inch Punch. Recently I've been paying closer attention to when I'm in "doing" or "trying" mode, and whether or not those are quality handles, there do seem to be multiple forms of "doing" that have distinct qualities to them.

It's way easier for me to "just" get out of bed in the morning, than to try and convince myself getting out of bed is a good idea. It's way easier for me to "just" hit send on an email or message that might not be worded right, rather than convince myself that it's the right move.

When I act on a habit that fights incentives of comfort, there's a part of me that tries to reason me out of it. I've noticed that any engagement with that voice leads to a drastic reduction in the probability that I do the thing (this is much easier to notice with physical actions and habits).

This doesn't apply to all things. There are some things where I genuinely don't know what a good decision looks like, and I know there's very little chance that "just taking action" will give a stellar result. I have no formalism for spotting when to apply a One Inch Punch and when to engage in deliberation, though I have a feeling that my S1 is getting better at doing such categorizing.