The Actionable Version of "Keep Your Identity Small" 2019-12-06T01:34:36.844Z · score: 17 (5 votes)
Hard to find factors messing up experiments: Examples? 2019-11-15T17:46:03.762Z · score: 33 (14 votes)
Books/Literature on resolving technical disagreements? 2019-11-14T17:30:16.482Z · score: 13 (2 votes)
Paradoxical Advice Thread 2019-08-21T14:50:51.465Z · score: 13 (6 votes)
The Internet: Burning Questions 2019-08-01T14:46:17.164Z · score: 13 (6 votes)
How much time do you spend on twitter? 2019-08-01T12:41:33.289Z · score: 6 (1 votes)
What are the best and worst affordances of twitter as a technology and as a social ecosystem? 2019-08-01T12:38:17.455Z · score: 6 (1 votes)
Do you use twitter for intellectual engagement? Do you like it? 2019-08-01T12:35:57.359Z · score: 16 (6 votes)
How to Ignore Your Emotions (while also thinking you're awesome at emotions) 2019-07-31T13:34:16.506Z · score: 149 (75 votes)
Where is the Meaning? 2019-07-22T20:18:24.964Z · score: 22 (7 votes)
Prereq: Question Substitution 2019-07-18T17:35:56.411Z · score: 20 (7 votes)
Prereq: Cognitive Fusion 2019-07-17T19:04:35.180Z · score: 15 (6 votes)
Magic is Dead, Give me Attention 2019-07-10T20:15:24.990Z · score: 50 (29 votes)
Decisions are hard, words feel easier 2019-06-21T16:17:22.366Z · score: 9 (6 votes)
Splitting Concepts 2019-06-21T16:03:11.177Z · score: 8 (3 votes)
STRUCTURE: A Hazardous Guide to Words 2019-06-20T15:27:45.276Z · score: 7 (2 votes)
Defending points you don't care about 2019-06-19T20:40:05.152Z · score: 44 (18 votes)
Words Aren't Type Safe 2019-06-19T20:34:23.699Z · score: 24 (10 votes)
Arguing Definitions 2019-06-19T20:29:44.323Z · score: 13 (6 votes)
What is your personal experience with "having a meaningful life"? 2019-05-22T14:03:39.509Z · score: 22 (11 votes)
Models of Memory and Understanding 2019-05-07T17:39:58.314Z · score: 20 (5 votes)
Rationality: What's the point? 2019-02-03T16:34:33.457Z · score: 12 (5 votes)
STRUCTURE: Reality and rational best practice 2019-02-01T23:51:21.390Z · score: 6 (1 votes)
STRUCTURE: How the Social Affects your rationality 2019-02-01T23:35:43.511Z · score: 1 (3 votes)
STRUCTURE: A Crash Course in Your Brain 2019-02-01T23:17:23.872Z · score: 8 (5 votes)
Explore/Exploit for Conversations 2018-11-15T04:11:30.372Z · score: 38 (13 votes)
Starting Meditation 2018-10-24T15:09:06.019Z · score: 24 (11 votes)
Thoughts on tackling blindspots 2018-09-27T01:06:53.283Z · score: 45 (13 votes)
Can our universe contain a perfect simulation of itself? 2018-05-20T02:08:41.843Z · score: 21 (5 votes)
Reducing Agents: When abstractions break 2018-03-31T00:03:16.763Z · score: 42 (11 votes)
Diffusing "I can't be that stupid" 2018-03-24T14:49:51.073Z · score: 56 (18 votes)
Request for "Tests" for the MIRI Research Guide 2018-03-13T23:22:43.874Z · score: 70 (20 votes)
Types of Confusion Experiences 2018-03-11T14:32:36.363Z · score: 31 (9 votes)
Hazard's Shortform Feed 2018-02-04T14:50:42.647Z · score: 31 (9 votes)
Explicit Expectations when Teaching 2018-02-04T14:12:09.903Z · score: 53 (17 votes)
TSR #10: Creative Processes 2018-01-17T03:05:18.903Z · score: 16 (4 votes)
No, Seriously. Just Try It: TAPs 2018-01-14T15:24:38.692Z · score: 42 (14 votes)
TSR #9: Hard Rules 2018-01-09T14:57:15.708Z · score: 32 (10 votes)
TSR #8 Operational Consistency 2018-01-03T02:11:32.274Z · score: 20 (8 votes)
TSR #7: Universal Principles 2017-12-27T01:54:39.974Z · score: 23 (8 votes)
TSR #6: Strength and Weakness 2017-12-19T22:23:57.473Z · score: 3 (3 votes)
TSR #5 The Nature of Operations 2017-12-12T23:37:06.066Z · score: 16 (5 votes)
Learning AI if you suck at math 2017-12-07T15:15:15.480Z · score: 10 (4 votes)
TSR #4 Value Producing Work 2017-12-06T02:44:27.822Z · score: 20 (8 votes)
TSR #3 Entrainment: Discussion 2017-12-01T16:46:35.718Z · score: 25 (9 votes)
Changing habits for open threads 2017-11-26T12:54:27.413Z · score: 9 (4 votes)
Increasing day to day conversational rationality 2017-11-16T21:18:37.424Z · score: 27 (11 votes)
Acknowledging Rationalist Angst 2017-11-06T05:26:45.505Z · score: 30 (12 votes)
Trope Dodging 2017-10-21T18:43:34.729Z · score: 4 (4 votes)


Comment by hazard on The Actionable Version of "Keep Your Identity Small" · 2019-12-06T04:19:21.768Z · score: 2 (1 votes) · LW · GW

I see having a group identity as part of meeting one's needs, specifically one's social needs. So basically I still predict that the ease with which you can discard a particular group identity will be proportional to its monopoly on meeting your social needs.

And then my follow-up recommendation is something like, "Find another way to meet those needs before trying to throw away the identity, both for your sanity and to increase the odds of success" (though I can imagine changing my tone on that based on particular circumstances)

Is your stance something like, "Regardless of the monopoly it has on meeting your needs, you should discard the group identity as soon as you can identify it, because group identities are just that corrosive"?

Comment by hazard on Hazard's Shortform Feed · 2019-12-04T23:32:25.791Z · score: 5 (3 votes) · LW · GW

Act Short Now

  • Sleeping in
  • Flirting more

Think More Wrong

  • I no longer buy that there's a structural difference between math/the formal/a priori and science/the empirical/a posteriori.
  • Probability theory feels sorta lame.
Comment by hazard on Hazard's Shortform Feed · 2019-12-04T14:32:37.749Z · score: 7 (3 votes) · LW · GW

What am I currently doing to Act Long Now? (Dec 4th 2019)

  • Switching to Roam: Though it's still in development and there are a lot of technical hurdles to this being a long now move (they don't have good import export, it's all cloud hosted and I can't have my own backups), putting ideas into my roam network feels like long now organization for maximized creative/intellectual output over the years.
  • Trying to milk a lot of exploration out of the next year before I start work, hopefully giving myself springboards to more things at points in the future where I might not have had the energy to get started / make the initial push.
  • Being kind.
  • Arguing Politics* With my Best Friends

What am I currently doing to think Less Wrong?

  • Writing more has helped me hone my thinking.
  • Lots of progress on understanding emotional learning (or more practically, how to do emotional unlearning), allowing me to get to a more even-keeled center from which to think and act.
  • Getting better at ignoring the bottom line and genuinely considering what the world would be like for alternative hypotheses.
Comment by hazard on Hazard's Shortform Feed · 2019-12-04T14:20:36.281Z · score: 4 (2 votes) · LW · GW

Yesterday I read the first 5 articles on google for "why arguments are useless". It seems pretty in the zeitgeist that "when people have their identity challenged you can't argue with them." A few of them stopped there and basically declared communication to be impossible if identity is involved; a few of them circuitously hinted at learning to listen and find common ground. A reason I want to get this post out is to add to the pile of "Here's why identity doesn't have to be a stop sign."

Comment by hazard on Naryan Wong's Shortform · 2019-12-03T23:07:01.071Z · score: 2 (1 votes) · LW · GW

Sounds like an interesting crew. I'm also interested to hear how it goes!

Comment by hazard on Call for resources on the link between causation and ontology · 2019-12-03T02:28:32.468Z · score: 2 (1 votes) · LW · GW

As a book recommendation, The Book of Why (review here) gives a well-explained intro to some modern (or maybe the cool kids have already moved on to something else) reasoning about differentiating causation and correlation.

Comment by hazard on Naryan Wong's Shortform · 2019-12-02T19:18:13.704Z · score: 2 (1 votes) · LW · GW

Those activities all sound fun and useful, and my gut also says that this will be foreign to a lot of people (i.e. most of my friends / people I know at meetups) and it won't actually turn out that well (that's not at all me suggesting to avoid these ideas). Are the people at your meetup already used to these sorts of activities?

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T19:08:56.167Z · score: 2 (1 votes) · LW · GW

Aaaah, I see now. Just edited to what I think fits.

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T19:03:44.530Z · score: 2 (1 votes) · LW · GW

A thought that is related to this general pattern, but not this example: think of having an idea of an end skill that you're excited by (doing bayes updates irl, successfully implementing TAPs, being swayed by "solid logical arguments"). Also imagine not having a theory of change. I personally have sometimes not noticed that there is or could be an actual theory of how to move from A to B (often because I thought I should already be able to do that), and so would use the black box negative reinforcement strategy on myself.

Being in that place involved being stuck for a while and feeling bad about being stuck. Progress was only made when I managed to go "Oh. There are steps to get from A to B. I can't expect to already know them. I must focus on understanding this progression, and not on just punishing myself whenever I fail."

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T01:53:11.540Z · score: 3 (2 votes) · LW · GW

I like that because I can verb it while speaking.

"How much cattle could you fit in this lobby? You can answer directly or mist."

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T01:42:20.487Z · score: 2 (1 votes) · LW · GW

(Meta: the order wasn't important, thanks for thinking about that though)

The selection part is something else I was thinking about. One of my thoughts was your "If there's no way to train PhDs, they die out." And the other was me being a bit skeptical of how big the pool would be right this second if we adopted a really thick skin policy. Reflecting on that second point, I realize I'm drawing from my day to day distribution, and don't have thoughts about how thick skinned most LW people are or aren't.

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T01:37:59.278Z · score: 2 (1 votes) · LW · GW

Yeah, I only talked about A after. Is the parenthetical rhetorical? If not I'm missing the thing you want to say.

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T00:37:53.625Z · score: 2 (1 votes) · LW · GW

Or, if you're okay with being a bit less of a canonical robust agent and don't want to take on the costs of reliability, you could try to always match your work to your state. I'm thinking more of "mood" than "state" here. Be infinitely creative chaos.

Oooh, I don't know any blog post to cite, but Duncan mentioned at a CFAR workshop the idea of being a King or a Prophet. Both can be reliable and robust agents. The King does so by putting out Royal Decrees about what they will do, and then executing said plans. The Prophet gives you prophecies about what they will do in the future, and they come true. While you can count on both the decrees of the king and the prophecies of the prophet, the actions of the prophet are more unruly and chaotic, and don't seem to make as much sense as the king's.

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T00:30:36.522Z · score: 2 (1 votes) · LW · GW

I still think this is genius.

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T00:25:43.544Z · score: 3 (2 votes) · LW · GW

When I see this behavior, I worry that the rationalist is setting themselves up to have a blindspot when it comes to themselves being "overly sensitive" to feedback. I worry about this because it's happened to me. Not with reactions to feedback but with other things. It's partially the failure mode of thinking that some state is beneath you, being upset and annoyed at others for being in that state, and this disdain making it hard to see when you engage in it.

K, I get that thinking a mistake is trivial doesn't automatically mean you're doomed to secretly make it forever. Still, I worry.

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T00:16:25.971Z · score: 2 (1 votes) · LW · GW

I've been thinking about this as a general pattern, and have specifically filled in "you should be thick skinned" to make it concrete. Here's a thought that applies to this concrete example that doesn't necessarily apply to the general pattern.

There's all sorts of reasons why someone might feel hurt, put-off, or upset about how someone gives them feedback or disagrees with them. One of these ways can be something like, "From past experience I've learned that someone who uses XYZ language or ABC tone of voice is saying what they said to try and be mean to me, and they will probably try to hurt and bully me in the future."

If you are the rationalist in this situation, you're annoyed that someone thinks you're a bully. You aren't a bully! And it sure would suck if they convinced other people that you were a bully. So you tell them that, duh, you aren't trying to be mean, that this is just how you talk, and that they should trust you.

If you're the person being told to change, you start to get even more worried (after all, this is exactly what your piece-of-shit older brother would do to you); this person is telling you to trust that they aren't a bully when you have no reason to, and you're worried they're going to turn the bystanders against you.

Hmmmm, after writing this out the problem seems much harder to deal with than I first thought.

Comment by hazard on Hazard's Shortform Feed · 2019-12-01T23:59:56.010Z · score: 3 (2 votes) · LW · GW

The way this can feel to the person being told to change: "None of us care about how hard this is for you, nor the pain you might be feeling right now. Just change already, yeesh." (it can be true or false that the rationalist actually thinks this. I think I've seen some people playing the rationalist role in this story who explicitly endorsed communicating this sentiment)

Now, I understand that making someone feel emotionally supported takes various levels of effort. Sometimes it might seem like the effort required is not worth the loss in pursuing the original rationality target. We could have lots of fruitful discussion about what good norms would be for drawing that line. But I think another problematic thing that can happen is that in the rationalist's rush to get back on track to pursuing the important target, they intentionally or unintentionally communicate, "You aren't really in pain. Or if you are, you shouldn't be in pain / you suck or are weak for feeling pain right now." Being told you aren't in pain SUCCCKS, especially when you're in pain. Being reprimanded for being in pain SUCCCKS, especially when you're in pain.

Claim: Even if you've reached a point where it would be too costly to give the other person adequate emotional support, the least you can do is not make them think they're being gaslit about their pain or reprimanded for it.

Comment by hazard on Hazard's Shortform Feed · 2019-12-01T23:49:38.150Z · score: 3 (2 votes) · LW · GW

If you really had no idea... fine, you can't do much better than trying to operant-condition a person towards the end goal. In my world, getting a deep understanding of how to change is the biggest goal/point of rationality (I've given myself away, I care about AI Alignment less than you do ;).

So trying to skip to the rousing debate and clash of ideas while just hoping everyone figures out how to handle it feels like leaving most of the work undone.

Comment by hazard on Hazard's Shortform Feed · 2019-12-01T23:41:59.626Z · score: 2 (1 votes) · LW · GW

You have an exciting idea about how people could do things differently. Or maybe you think of norms which if they became mainstream would drastically increase epistemic sanity. "If people weren't so sensitive and attached to their identities then they could receive feedback and handle disagreements, allowing us to more rapidly work towards the truth." (example picked because versions of this stance have been discussed on LW)

Sometimes the rationalist is thinking "I've got no idea how becoming more or less sensitive, gaining a thicker or thinner skin, or shedding or gaining identity works in humans. So I'm just going to black box this, tell people they should change, negatively reinforce them when they don't, and hope for the best." (ps I don't think everyone thinks this, though I know at least one person who does) (most relevant parts in italics)

Comments will be continued thoughts on this behavior.

Comment by hazard on Hazard's Shortform Feed · 2019-12-01T23:27:12.067Z · score: 2 (1 votes) · LW · GW

This comment will collect things that I think beginner rationalists, "naive" rationalists, or "old school" rationalists (these distinctions are in my head, I don't expect them to translate) do which don't help them.

Comment by hazard on eigen's Shortform · 2019-12-01T20:31:33.723Z · score: 5 (2 votes) · LW · GW

Would recommend the book. I frequently use the models and frames he puts forward in it, and as someone who's only read a small amount of Robin's blog posts, it seems like a lot of his blogging is connected to the ideas he puts in that book.

Comment by hazard on 3 Cultural Infrastructure Ideas from MAPLE · 2019-12-01T20:28:43.055Z · score: 3 (2 votes) · LW · GW

I think this is a very important point, and that anyone trying to build community around a mission should pay attention to it.

Comment by hazard on eigen's Shortform · 2019-12-01T20:22:48.352Z · score: 15 (4 votes) · LW · GW

I haven't done a full re-read, but I have re-read certain chapters. It was hella helpful. The experience was often, "Ohhhh, I only got the shadow of the idea on my first pass, it's grown since then but has been scattered, and the reread let me unify the ideas and feel confident I'm now getting the core idea and its repercussions."

Comment by hazard on Argue Politics* With Your Best Friends · 2019-11-30T23:34:37.121Z · score: 9 (2 votes) · LW · GW

This idea had been in my head since reading this. It helped me notice how sometimes ceasing to argue with a friend was an indication that I'd "given up on them" as someone I can work well with.

I was already in the "argue with friends" camp, though this post helped me frame it to others. Can't say for sure, but I think this post would have a decent shot at convincing someone who'd previously framed arguments as a bad time.

Comment by hazard on Give praise · 2019-11-30T23:26:13.100Z · score: 8 (4 votes) · LW · GW
I've asked around, and it even seems to be considered "normal" in most (non-systematizing) communities to habitually give praise. It's even apparently something people regard as necessary for proper psychological health. But honestly, apart from volunteering at CFAR, I can't recall getting much praise for anything I've done for the community.

I think there are many "standard well-being norms" that LW lacks, and seeing this lack has often shocked me. Giving praise seems like super low hanging fruit, and I want to signal boost this.

(Though I do want this to be boosted more into the LW psyche, I do admit that this post is not what I think of when I imagine "LW's peer reviewed canonical knowledge")

Comment by hazard on Unrolling social metacognition: Three levels of meta are not enough. · 2019-11-29T23:26:08.960Z · score: 4 (2 votes) · LW · GW

This post (alongside Duncan's post Common Knowledge and Miasma) has been in my head since I read it. It has directed my attention to the many instances in which I can unroll a "simple" seeming judgment into multiple levels of modeling people's metas.

Things I'd like to see:

  • This post doesn't really argue much that this happens commonly. It just gives you the idea of the mechanisms, and upon reading I went "Oh shit, of course, I see this all the time." It could be useful to see what research there is on people making these sorts of recursive mental models.
  • I think to really hit the point home, a few examples should be given that are complex enough that to fully unroll them drawings are needed, and yet the simple English description still is easily parse-able ("I don't think she liked that he was being weird to everyone")

Comment by hazard on Hazard's Shortform Feed · 2019-11-29T19:25:07.256Z · score: 6 (3 votes) · LW · GW

I've been having fun reading through Signals: Evolution, Learning, & Information. Many of the scenarios revolve around variations of the Lewis Signalling Game. It's a nice simple model that lets you talk about communication without having to talk about intentionality (what you "meant" to say).

Intention seems to mostly be about self-awareness of the existing signalling equilibrium. When I speak slowly and carefully, I'm constantly checking what I want to say against my understanding of our signalling equilibrium, and reasoning out implications. If I scream when I see a tiger, I'm still signalling, but various facts about the signalling equilibrium are not booted into consciousness.

So, claim: Lewis style signalling games are the root of all communication, from humans to dogs to bacteria. The "extra" stuff that humans seem to have, which is often called intent, has to do with having other/additional reasoning abilities, and being able to load one's signalling equilibrium into that reasoning system to further engage in shenanigans.
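To make the claim concrete, here's a minimal sketch of a Lewis signalling game learned by simple reinforcement (a Roth-Erev-style urn scheme, which is one of the learning dynamics the book discusses; the specific parameters and variable names here are my own illustration, not from the text). Sender and receiver start with no shared convention, yet a signalling equilibrium emerges from payoff alone, with no "intent" anywhere in the model:

```python
import random

# Two states, two signals, two acts; payoff 1 iff the receiver's act
# matches the state the sender observed.
N = 2

# Urn weights: sender picks a signal given the state,
# receiver picks an act given the signal.
sender = [[1.0] * N for _ in range(N)]
receiver = [[1.0] * N for _ in range(N)]

def draw(weights):
    return random.choices(range(N), weights=weights)[0]

def play_round():
    state = random.randrange(N)
    signal = draw(sender[state])
    act = draw(receiver[signal])
    success = (act == state)
    if success:  # reinforce only the choices that paid off
        sender[state][signal] += 1.0
        receiver[signal][act] += 1.0
    return success

random.seed(0)
early = sum(play_round() for _ in range(100)) / 100  # near chance (0.5)
for _ in range(10_000):
    play_round()
late = sum(play_round() for _ in range(100)) / 100   # climbs toward 1.0
print(early, late)
```

Neither agent ever "checks the equilibrium against what it wants to say"; the convention just crystallizes out of reinforcement, which is what makes the no-intentionality framing attractive.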

Comment by hazard on Hazard's Shortform Feed · 2019-11-29T03:24:28.663Z · score: 5 (3 votes) · LW · GW

Re Mental Mountains, I think one of the reasons that I get worried when I meet another youngin that is super gung-ho about rationality/"being logical and coherent", is that I don't expect them to have a good Theory of How to Change Your Mind. I worry that they will reason out a bunch of conclusions, succeed in high-level changing their minds, think that they've deeply changed their minds, but instead leave hoards of unresolved emotional memories/models that they learn to ignore and fuck them up later.

Comment by hazard on Hazard's Shortform Feed · 2019-11-25T21:19:17.414Z · score: 3 (2 votes) · LW · GW

I'm currently turning my notes from this class into some posts, and I'll wait to continue this until I'm able to get those up. Then, hopefully, it will be easier to see if this notion of simplicity is lacking. I'll let you know when that's done.

Comment by hazard on Hazard's Shortform Feed · 2019-11-25T05:06:55.690Z · score: 2 (1 votes) · LW · GW

I think me adding more details will clear things up.

The setup presupposes a certain amount of realism. Start with Possible Worlds Semantics, where logical propositions are attached to / refer to the set of possible worlds in which they are true. A hypothesis is some proposition. We think of data as giving us some proposition (in practice this is shaped by the methods/tools you have to look at and measure the world), which narrows down the allowable possible worlds consistent with the data.

Now is the part that I think addresses what you were getting at. I don't think there's a direct analog in my setup to your (a). You could consider the hypothesis/proposition, "the set of all worlds compatible with the data I have right now", but that's not quite the same. I have more thoughts, but first, do you still feel like your idea is relevant to the setup I've described?

Comment by hazard on RAISE post-mortem · 2019-11-24T19:14:54.150Z · score: 15 (7 votes) · LW · GW

Thanks for writing this! I'm glad you've found a new trajectory, and it looks like you've done a decent amount to process and integrate RAISE not having worked out. Best of luck on the next chapter.

Comment by hazard on Hazard's Shortform Feed · 2019-11-24T18:21:47.609Z · score: 6 (4 votes) · LW · GW

"Moving from fossil fuels to renewable energy" but as a metaphor for motivational systems. Nate Soares' Replacing Guilt seems to be trying to do this.

With motivation, you can more easily go, "My life is gonna be finite. And it's not like someone else has to deal with my motivation system after I die, so why not run on guilt and panic?"

Hmmmm, maybe something like, "It would be doper if people at large scale got to more renewable motivational systems, and for that change to happen it feels important for people growing up to be able to see those who have made the leap."

Comment by hazard on Hazard's Shortform Feed · 2019-11-24T18:15:55.697Z · score: 4 (2 votes) · LW · GW

There are two times when Occam's razor comes to mind. One is for addressing "crazy" ideas ala "The witch down the road did it", and one is for picking which legit-seeming hypotheses I might prioritize in some scientific context.

For the first one, I really like Eliezer's reminder that when going with "The witch did it" you have to include the observed data in your explanation.

For the second one, I've been thinking about the simplicity formulation that one of my professors uses. Roughly, A is simpler than B if all data that is consistent with A is a subset of all data that is consistent with B.

His motivation for using this notion has to do with minimizing the number of times you are forced to update.
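That subset formulation can be sketched in a few lines: model each hypothesis as the set of possible observations consistent with it, and "A is simpler than B" becomes a proper-subset check. The concrete hypotheses below are my own made-up example, not the professor's:

```python
def simpler_than(a, b):
    """A is strictly simpler than B: A is consistent with strictly fewer data."""
    return a < b  # proper-subset comparison on frozensets

worlds = range(8)  # a toy space of possible observations
even = frozenset(w for w in worlds if w % 2 == 0)           # "the value is even"
multiple_of_4 = frozenset(w for w in worlds if w % 4 == 0)  # "divisible by 4"
odd = frozenset(w for w in worlds if w % 2 == 1)            # "the value is odd"

print(simpler_than(multiple_of_4, even))  # True: {0, 4} is a proper subset of {0, 2, 4, 6}
print(simpler_than(even, odd))            # False: disjoint sets, so incomparable
```

One appeal of this ordering: the simpler hypothesis rules out more possible data, so favoring it means more of the observations you could see would force an update, rather than quietly fitting whatever arrives.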

Comment by hazard on Hazard's Shortform Feed · 2019-11-24T01:01:47.174Z · score: 4 (2 votes) · LW · GW

"Contradictions aren't bad because they make you explode and conclude everything, they're bad because they don't tell you what to do next."

Quote from a professor of mine who makes formalisms for philosophy of science stuff.

Comment by hazard on Junto: Questions for Meetups and Rando Convos · 2019-11-23T14:49:03.103Z · score: 3 (2 votes) · LW · GW

Thanks for this list! I'm thinking about ways I might knit these into casual conversation.

Comment by hazard on Hard to find factors messing up experiments: Examples? · 2019-11-19T16:21:55.864Z · score: 2 (1 votes) · LW · GW

I think I see what you mean. Discovering rats can navigate mazes via sounds is a new interesting thing. You could go on to study more about how that works and how it interacts with other things rats do. Though if you just wanted to look at some aspect of how rats learn, you'd have to account for the fact that in your experiment their hearing could circumvent some aspect of your design.

Comment by hazard on Form and Feedback in Phenomenology · 2019-11-19T00:08:48.290Z · score: 2 (1 votes) · LW · GW

Oof, this takes me some careful attention to follow (pretty sure it's the subject matter, not your writing). Enjoying it so far. While reading this, I have the experience of slowly figuring out that a thing you're using phenomenological language to talk about matches up with some aspect of my own thinking, and I'm on edge waiting to find when (if ever) they diverge.

Comment by hazard on The World as Phenomena · 2019-11-17T22:03:26.635Z · score: 4 (2 votes) · LW · GW

I've started reading the first few posts in this sequence and I'm very appreciative of your attention to detail concerning the origins of various terms, how others have used them, and how you are using them.

Comment by hazard on Hard to find factors messing up experiments: Examples? · 2019-11-17T15:41:35.677Z · score: 4 (2 votes) · LW · GW

More on the painstaking effort. I was gesturing at the fact that Young probably didn't think the floors had anything to do with running rat mazes at the beginning, and later it turned out to be important. Through painstaking effort he realized this was what was messing stuff up.

Comment by hazard on [Productivity] Task vs. time delimitation · 2019-11-17T13:52:30.259Z · score: 3 (2 votes) · LW · GW

Cool 2x2. Are there any particular ways that doing a categorization of a task affects your work flow?

Comment by hazard on Personal Experiment: Counterbalancing Risk-Adversion · 2019-11-16T00:35:53.385Z · score: 3 (2 votes) · LW · GW

Seconded. Any examples of decisions the OP would be okay with sharing I'd find useful. I'm especially curious about the factors that made it seem clear in retrospect which was the "right" option.

Comment by hazard on Hard to find factors messing up experiments: Examples? · 2019-11-16T00:26:41.140Z · score: 4 (2 votes) · LW · GW

Thanks! This is an okay article that adds more exposition, and it looks like this incident spawned a new protocol for avoiding swab contamination.

Comment by hazard on Practical Guidelines for Memory Reconsolidation · 2019-11-14T21:17:27.549Z · score: 5 (2 votes) · LW · GW
If at any point, you encounter resistance to working on a particular technique with a particular schema, what you've found is a "Meta-schema" that believes changing this belief would be harmful. Rather than push through this resistance, loop back to the beginning of the Debugging process, and work with this new schema.

This was specific and useful. I think most "resistance is good" advice I've heard has not made the subtle point of needing to address the meta-schema.

Comment by hazard on Books/Literature on resolving technical disagreements? · 2019-11-14T20:59:36.851Z · score: 2 (1 votes) · LW · GW

Thanks. Any particular key words or fields you'd suggest?

Comment by hazard on TurnTrout's shortform feed · 2019-11-13T19:01:23.568Z · score: 2 (1 votes) · LW · GW

This happens to me sometimes. I know several people who have this happen at the end of a Uni semester. Hope you can get some rest.

Comment by hazard on Hazard's Shortform Feed · 2019-11-12T02:24:16.675Z · score: 2 (1 votes) · LW · GW

Have some horrible jargon: I spit out a question or topic and ask you for your NeMRIT, your Next Most Relevant Interesting Take.

Either give your thoughts about the idea I presented as you understand it, or, if that's boring, give thoughts that interest you that seem conceptually closest to the idea I brought up.

Comment by hazard on Hazard's Shortform Feed · 2019-11-11T20:02:42.789Z · score: 2 (1 votes) · LW · GW

Kevin Zollman at CMU looks like he's done a decent amount of research on group epistemology. I plan to read the deets at some point, here's a link if anyone wanted to do it first and post something about it.

Comment by hazard on Randomness vs. Ignorance · 2019-11-08T16:15:25.415Z · score: 2 (1 votes) · LW · GW

I've seen some academic talk of this. Adam Bjorndahl at CMU has written some papers where he reframes situations that normally have randomness as being about the ignorance of an agent. Noting that his papers are very technical and I don't know what if any good general insights there are to glean from them.

Comment by hazard on Eli's shortform feed · 2019-11-05T00:15:07.140Z · score: 2 (1 votes) · LW · GW

Relating to the "Perception of Progress" bit at the end. I can confirm for a handful of physical skills I practice there can be a big disconnect between Perception of Progress and Progress from a given session. Sometimes this looks like working on a piece of sleight of hand, it feeling weird and awkward, and the next day suddenly I'm a lot better at it, much more than I was at any point in the previous days practice.

I've got a hazy memory of a breakdancer blogging about how a particular shade of "no progress fumbling" can be a signal that a certain amount of "unlearning" is happening, though I can't find the source to vet it.

Comment by hazard on Aella on Rationality and the Void · 2019-11-01T16:48:09.110Z · score: 2 (1 votes) · LW · GW