Finding a quote: "proof by contradiction is the closest math comes to irony" 2019-12-26T17:40:41.669Z · score: 9 (3 votes)
ToL: Methods and Success 2019-12-10T21:17:56.779Z · score: 9 (2 votes)
ToL: This ONE WEIRD Trick to make you a GENIUS at Topology! 2019-12-10T21:02:40.212Z · score: 11 (3 votes)
ToL: The Topological Connection 2019-12-10T20:29:06.998Z · score: 12 (4 votes)
ToL: Introduction 2019-12-10T20:19:06.029Z · score: 12 (7 votes)
ToL: Foundations 2019-12-10T20:15:09.099Z · score: 11 (3 votes)
Books on the zeitgeist of science during Lord Kelvin's time. 2019-12-09T00:17:30.207Z · score: 32 (5 votes)
The Actionable Version of "Keep Your Identity Small" 2019-12-06T01:34:36.844Z · score: 63 (28 votes)
Hard to find factors messing up experiments: Examples? 2019-11-15T17:46:03.762Z · score: 33 (14 votes)
Books/Literature on resolving technical disagreements? 2019-11-14T17:30:16.482Z · score: 13 (2 votes)
Paradoxical Advice Thread 2019-08-21T14:50:51.465Z · score: 13 (6 votes)
The Internet: Burning Questions 2019-08-01T14:46:17.164Z · score: 13 (6 votes)
How much time do you spend on twitter? 2019-08-01T12:41:33.289Z · score: 6 (1 votes)
What are the best and worst affordances of twitter as a technology and as a social ecosystem? 2019-08-01T12:38:17.455Z · score: 6 (1 votes)
Do you use twitter for intellectual engagement? Do you like it? 2019-08-01T12:35:57.359Z · score: 16 (6 votes)
How to Ignore Your Emotions (while also thinking you're awesome at emotions) 2019-07-31T13:34:16.506Z · score: 149 (75 votes)
Where is the Meaning? 2019-07-22T20:18:24.964Z · score: 22 (7 votes)
Prereq: Question Substitution 2019-07-18T17:35:56.411Z · score: 20 (7 votes)
Prereq: Cognitive Fusion 2019-07-17T19:04:35.180Z · score: 15 (6 votes)
Magic is Dead, Give me Attention 2019-07-10T20:15:24.990Z · score: 50 (29 votes)
Decisions are hard, words feel easier 2019-06-21T16:17:22.366Z · score: 9 (6 votes)
Splitting Concepts 2019-06-21T16:03:11.177Z · score: 8 (3 votes)
STRUCTURE: A Hazardous Guide to Words 2019-06-20T15:27:45.276Z · score: 7 (2 votes)
Defending points you don't care about 2019-06-19T20:40:05.152Z · score: 44 (18 votes)
Words Aren't Type Safe 2019-06-19T20:34:23.699Z · score: 24 (10 votes)
Arguing Definitions 2019-06-19T20:29:44.323Z · score: 13 (6 votes)
What is your personal experience with "having a meaningful life"? 2019-05-22T14:03:39.509Z · score: 22 (11 votes)
Models of Memory and Understanding 2019-05-07T17:39:58.314Z · score: 20 (5 votes)
Rationality: What's the point? 2019-02-03T16:34:33.457Z · score: 12 (5 votes)
STRUCTURE: Reality and rational best practice 2019-02-01T23:51:21.390Z · score: 6 (1 votes)
STRUCTURE: How the Social Affects your rationality 2019-02-01T23:35:43.511Z · score: 1 (3 votes)
STRUCTURE: A Crash Course in Your Brain 2019-02-01T23:17:23.872Z · score: 8 (5 votes)
Explore/Exploit for Conversations 2018-11-15T04:11:30.372Z · score: 38 (13 votes)
Starting Meditation 2018-10-24T15:09:06.019Z · score: 24 (11 votes)
Thoughts on tackling blindspots 2018-09-27T01:06:53.283Z · score: 45 (13 votes)
Can our universe contain a perfect simulation of itself? 2018-05-20T02:08:41.843Z · score: 21 (5 votes)
Reducing Agents: When abstractions break 2018-03-31T00:03:16.763Z · score: 42 (11 votes)
Diffusing "I can't be that stupid" 2018-03-24T14:49:51.073Z · score: 56 (18 votes)
Request for "Tests" for the MIRI Research Guide 2018-03-13T23:22:43.874Z · score: 70 (20 votes)
Types of Confusion Experiences 2018-03-11T14:32:36.363Z · score: 31 (9 votes)
Hazard's Shortform Feed 2018-02-04T14:50:42.647Z · score: 31 (9 votes)
Explicit Expectations when Teaching 2018-02-04T14:12:09.903Z · score: 53 (17 votes)
TSR #10: Creative Processes 2018-01-17T03:05:18.903Z · score: 16 (4 votes)
No, Seriously. Just Try It: TAPs 2018-01-14T15:24:38.692Z · score: 42 (14 votes)
TSR #9: Hard Rules 2018-01-09T14:57:15.708Z · score: 32 (10 votes)
TSR #8 Operational Consistency 2018-01-03T02:11:32.274Z · score: 20 (8 votes)
TSR #7: Universal Principles 2017-12-27T01:54:39.974Z · score: 23 (8 votes)
TSR #6: Strength and Weakness 2017-12-19T22:23:57.473Z · score: 3 (3 votes)
TSR #5 The Nature of Operations 2017-12-12T23:37:06.066Z · score: 16 (5 votes)
Learning AI if you suck at math 2017-12-07T15:15:15.480Z · score: 10 (4 votes)


Comment by hazard on The Relational Stance · 2020-02-12T17:17:24.437Z · score: 4 (3 votes) · LW · GW


Comment by hazard on A Cautionary Note on Unlocking the Emotional Brain · 2020-02-09T15:00:17.580Z · score: 8 (4 votes) · LW · GW

Thanks for sharing! ++ for "I tried the thing, this is how it went" post

Comment by hazard on Reality-Revealing and Reality-Masking Puzzles · 2020-01-17T14:19:40.282Z · score: 9 (6 votes) · LW · GW

It might be useful to know that I'm not that sold on a lot of singularity stuff, and the parts of rationality that have affected me the most are some of the more general thinking principles: "Look at the truth even if it hurts" / "Understanding tiny amounts of evo and evo psych ideas" / "Here's 18 different biases, now you can tear down most people's arguments".

It was those ideas (a mix of the naive and sophisticated form of them) + my own idiosyncrasies that caused me a lot of trouble. So that's why I say "rationalist memes". I guess that if I bought more singularity stuff I might frame it as "weird but true ideas".

Comment by hazard on Reality-Revealing and Reality-Masking Puzzles · 2020-01-16T22:17:48.575Z · score: 6 (6 votes) · LW · GW

I found this a very useful post. It feels like a key piece in helping me think about CFAR, but also it sharpens my own sense of what stuff in "rationality" feels important to me. Namely "Helping people not have worse lives after interacting with rationalist memes"

Comment by hazard on "human connection" as collaborative epistemics · 2020-01-13T03:17:19.449Z · score: 6 (3 votes) · LW · GW
Bar the lone soul on a heroic dissent, I don't think most of us are able to keep meaningfully developing our worldview if there is no one to enthusiastically share our findings with.

Some version of this feels pretty important.

Comment by hazard on Hazard's Shortform Feed · 2020-01-13T02:26:09.987Z · score: 4 (3 votes) · LW · GW

So a thing Galois theory does is explain:

Why is there no formula for the roots of a fifth (or higher) degree polynomial equation in terms of the coefficients of the polynomial, using only the usual algebraic operations (addition, subtraction, multiplication, division) and application of radicals (square roots, cube roots, etc)?

Which makes me wonder: would there be a formula if you used more machinery than the usual operations and radicals? What does "more than radicals" look like?
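As a side note on the quoted fact, a computer algebra system can be used to see the radical/non-radical divide concretely. This is a hypothetical sketch assuming SymPy is installed: when a polynomial is solvable by radicals, `solve` returns explicit radical expressions; when it isn't (e.g. a quintic whose Galois group is the non-solvable S5), it falls back to opaque `CRootOf` objects.

```python
# Toy illustration (assumes sympy is available): radical vs non-radical roots.
from sympy import symbols, solve, CRootOf

x = symbols("x")

# x^5 - 2 is solvable by radicals: roots are 2**(1/5) times roots of unity.
solvable_roots = solve(x**5 - 2, x)

# x^5 - x + 1 has Galois group S_5, which is not solvable,
# so no radical formula exists and sympy returns CRootOf placeholders.
unsolvable_roots = solve(x**5 - x + 1, x)

assert not any(isinstance(r, CRootOf) for r in solvable_roots)
assert all(isinstance(r, CRootOf) for r in unsolvable_roots)
```

Those `CRootOf` objects are one answer to "more than radicals": they name a root purely by the polynomial and an index, which is strictly more machinery than the algebraic operations plus radicals.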

Comment by hazard on Hazard's Shortform Feed · 2020-01-12T18:31:03.346Z · score: 2 (1 votes) · LW · GW

I'm noticing an even more granular version of this. Things that I might do casually (reading some blog posts) have a significant effect on what's loaded into my mind the next day. Smaller than the week level, I'm noticing a 2-3 day cycle of "the thing that was most recently in my head" and how it affects the question of "If I could work on anything rn what would it be?"

This week on Tuesday I picked Wednesday as the day I was going to write a sketch. But because of something I was thinking before going to bed, on Wednesday my head was filled with thoughts on urbex. So I switched gears, and urbex thoughts ran their course through Wednesday, and on Thursday I was ready to actually write a sketch (comedy thoughts need to be loaded for that)

Comment by hazard on Hazard's Shortform Feed · 2020-01-05T14:33:06.023Z · score: 5 (3 votes) · LW · GW

I've been writing on twitter more lately. Sometimes when I'm trying to express an idea, to generate progress I'll think: "What's the shortest sentence I can write that convinces me I know what I'm talking about?" This is different from "What's an explanation that's as simple as possible, but no simpler, for the reader?"

Starting a twitter thread and forcing out several tweet-sized chunks of an idea is quite helpful for that. It helps get the concept clearer in my head, and then I have something out there, and I can dwell on how I'd turn it into a consumable for others.

Comment by hazard on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-02T03:33:05.735Z · score: 5 (3 votes) · LW · GW
[...] and yet suppose that I were invited to write for a venue where my ideas would never be challenged, where my writing were not subjected to scrutiny, where no interested and intelligent readers would ask probing questions… shouldn’t I expect my writing (and my ideas!) to degrade?

I'm not completely swayed either way, but I want to acknowledge this as an important and interesting point.

Comment by hazard on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-01T22:39:53.339Z · score: 5 (3 votes) · LW · GW

Very useful comment, in that I have not previously imagined that this was your, or anyone else's, normative view on responding to comments.

Comment by hazard on Moloch Hasn’t Won · 2019-12-28T18:32:25.441Z · score: 6 (3 votes) · LW · GW

I'm quite interested in the rest of this. Though I did find the idea of Moloch useful for responding to the most naive forms of "If we all did X everything would be perfect", I also have a vague feeling that rationalists' belief in Moloch being all-powerful prevents them from achieving totally achievable levels of group success.

Comment by hazard on Values Assimilation Premortem · 2019-12-28T18:21:43.669Z · score: 3 (2 votes) · LW · GW

More or less. Here are some related pieces of content:

There's a twitter thread by Qiaochu that ostensibly is about addiction, but has the idea "It's more useful to examine what you're running from than what you're running to." In the context of our conversation, Christianity and rationalism would be "what you've been running to", and "what you're running from" (for me) has been social needs not being met, not having a lot of personal agency, etc.

Meaningness is an epic tome by David Chapman on different attitudes towards meaning that one can take and their repercussions.

Regarding examples and generalizing, I've been finding that it's really hard to feel like I've changed my mind in any substantive way unless I can find the examples and memories of events that led me to believe a general claim in the first place, and address those examples. Matt Goldenberg has a sequence on a specific version of this idea.

Comment by hazard on Values Assimilation Premortem · 2019-12-26T18:46:35.316Z · score: 9 (6 votes) · LW · GW

Hi, welcome to LW! Fellow deconverted Christian here. I've both gone through some crisis mode deconverting from Christianity, and some crisis mode when exploring and undoing some of the faux-rational patches I had made during the first crisis. Can't wait for round three :)

I'm happy to give some more thoughts, though it might be useful for you to enumerate a few example beliefs / behaviors that you are adopting and now rethinking. "rationalist" is a pretty big space and there's many different strokes for many different folks.

As a very general thought, I'm currently exploring the idea that most of my problems aren't related to big picture philosophy / world-view stuff, and more matters of increasing personal agency (i.e "Do I feel stressed from not enough money?" "Am I worried about the security of my job?" "Can I reliably have fun conversations?" "Can I spend time with people who love me?" "Does my body feel good?" etc). Though admittedly, I had to arrive at this stance via big picture world-view style thinking. Might be useful to dwell on.

Comment by hazard on Finding a quote: "proof by contradiction is the closest math comes to irony" · 2019-12-26T18:24:20.866Z · score: 2 (1 votes) · LW · GW

Thank you Gwern! This was it.

Comment by hazard on Unrolling social metacognition: Three levels of meta are not enough. · 2019-12-18T18:47:01.611Z · score: 4 (2 votes) · LW · GW

Knots by R.D. Laing is full of really on-point examples of multi-step inferences that often get condensed into a single feeling. If this post interests you at all, I think reading said book will be useful.

Comment by hazard on Is Causality in the Map or the Territory? · 2019-12-18T15:55:44.658Z · score: 4 (2 votes) · LW · GW

This and the parent comment were quite helpful for getting a more nuanced sense of what you're up to.

Point is: all of these models are operating at a pretty high level of abstraction, compared to the underlying physical reality. But it still seems like some abstract causal models are "right" and others are "wrong".

Good summary.

Comment by hazard on Is Causality in the Map or the Territory? · 2019-12-18T15:54:36.160Z · score: 5 (3 votes) · LW · GW

Positive reinforcement for noticing getting nerdsniped and mentioning it!

Comment by hazard on Hazard's Shortform Feed · 2019-12-16T22:14:44.092Z · score: 2 (1 votes) · LW · GW

Yeah. I guess the only place I can remember seeing it referenced in actions was with regard to assigning priors for Solomonoff induction. So I wonder if it changes anything there (though Solomonoff induction is already pretty abstracted away from other things, so it might not make sense to do a sensitivity analysis)

Comment by hazard on Hazard's Shortform Feed · 2019-12-16T20:26:39.841Z · score: 3 (2 votes) · LW · GW

So Kolmogorov complexity depends on the description language, but the complexity of a hypothesis in any two languages differs by at most a constant (whatever the size of an interpreter from one language to the other is).

This seems to mean that the complexity ordering of different hypotheses can be rearranged by switching languages, but "only so much". So

K_L1(A) < K_L1(B) and K_L2(B) < K_L2(A)

are both totally possible, as long as

|K_L1(h) − K_L2(h)| ≤ c for every hypothesis h.

I see how if you only care about orders of magnitude, the description language probably doesn't matter. But if you ever had to make a decision where it mattered whether the complexity was 1,000,000 vs 1,000,001, then the language does matter.

Where is KC actually used, and in those contexts how sensitive are results to small reordering like the one I presented?
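The ordering flip can be made concrete with a toy sketch. True Kolmogorov complexity is uncomputable, so this is only a loose analogy: it approximates description length in two made-up "languages", one where a string's program is the literal string itself, and one where the program is a zlib-compressed blob plus a fixed-size decompressor (that decompressor playing the role of the interpreter constant).

```python
import os
import zlib

# Rough proxy for description length in two "languages":
#   L1: "emit this literal string"        -> cost ~ len(s)
#   L2: "decompress this blob, then emit" -> cost ~ len(zlib.compress(s))
def k1(s: bytes) -> int:
    return len(s)

def k2(s: bytes) -> int:
    return len(zlib.compress(s))

random_ish = os.urandom(100)   # short but incompressible
repetitive = b"ab" * 500       # long but highly compressible

# In L1 the short random string is "simpler";
# in L2 the ordering flips, because compression exploits the repetition.
assert k1(random_ish) < k1(repetitive)
assert k2(repetitive) < k2(random_ish)
```

The flip is only possible because the two strings' lengths sit within the range the change of language can move them; sufficiently large complexity gaps survive any fixed interpreter constant.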

Comment by hazard on Computational Model: Causal Diagrams with Symmetry · 2019-12-13T20:15:24.089Z · score: 4 (2 votes) · LW · GW
Causal DAGs with symmetry are how we do this for Turing-computable functions in general. They show the actual cause-and-effect process which computes the result; conceptually they represent the computation rather than a black-box function.

This was the main interesting bit for me.

Comment by hazard on Causal Abstraction Toy Model: Medical Sensor · 2019-12-13T20:09:04.897Z · score: 4 (2 votes) · LW · GW

I enjoyed this! I had to read through the middle part twice; is the idea basically "it depends on what the distributions are, but there is another simple stat you can compute from them, which, combined with their average, gives you all the info you need"?

I liked that this was a simple example of how choices in the way you abstract do or don't lose different information.

Comment by hazard on jacobjacob's Shortform Feed · 2019-12-12T21:52:20.371Z · score: 2 (1 votes) · LW · GW

Agree with you and the OP, and note that the difference between my mental trope of gym and dojo is that I can go to the gym whenever, while a dojo is a place where practices happen at specifically scheduled times. I can see wanting both.

Comment by hazard on TurnTrout's shortform feed · 2019-12-12T19:08:03.098Z · score: 4 (2 votes) · LW · GW

Yay learning all the things! Your reviews are fun, also completely understandable putting energy elsewhere. Your energy for more learning is very useful for periodically bouncing myself into more learning.

Comment by hazard on TurnTrout's shortform feed · 2019-12-12T17:39:17.386Z · score: 4 (2 votes) · LW · GW

Have you been continuing your self-study schemes into realms beyond math stuff? If so I'm interested in both the motivation and how it's going! I remember having little interest in other non-physics science growing up, but that was also before I got good at learning things and my enjoyment was based on how well it was presented.

Comment by hazard on [Review] Meta-Honesty (Ben Pace, Dec 2019) · 2019-12-12T06:06:53.634Z · score: 3 (2 votes) · LW · GW

I also anticipate I'll write my own review/commentary on the OP, so mayhaps I can expand more on my thoughts and you can have more to respond to.

Comment by hazard on ToL: Foundations · 2019-12-11T18:43:31.969Z · score: 2 (1 votes) · LW · GW
In other words, does I(w) equal {A∈I:w∈A}?

This one!

Comment by hazard on [Review] Meta-Honesty (Ben Pace, Dec 2019) · 2019-12-11T15:24:45.359Z · score: 3 (2 votes) · LW · GW

I think it's very important to see that there are at least two different ideas/norms around honesty being proposed. There's:

[Living out meta-honesty in real life means] stopping and asking yourself "Would I be willing to publicly defend this as a situation in which unusually honest people should lie, if somebody posed it as a hypothetical?"

Which is a suggestion for your standards of object level honesty, and separately there is:

And so he simply suggests that on top of this, you should be absolutely honest about where you'll likely be honest and dishonest.

The idea that you should be meta-honest. You can think about them completely separately, and at first I found that lumping them together made it harder for me to get why the meta-honesty part mattered.

I could be 100% meta honest (when the code of meta-honesty is invoked), and still have an object level honesty policy that you/EY might consider way too loose.

Comment by hazard on The Intelligent Social Web · 2019-12-11T15:10:14.325Z · score: 6 (3 votes) · LW · GW
In my opinion, the biggest shift in the study of rationality since the Sequences were published were a change in focus from "bad math" biases (anchoring, availability, base rate neglect etc.) to socially-driven biases.

Funny enough, when I did a reread through the Sequences, I saw a huge number of little ways EY was pointing to various socially driven biases, which I'd missed the first time around. I think it might have been a framing thing: because it didn't feel like those bits were the main point of the essays, I smashed them all into "Don't be dumb/conformist" (a previous notion I could round off to).

Also great review.

Comment by hazard on ToL: Methods and Success · 2019-12-10T22:01:40.106Z · score: 3 (2 votes) · LW · GW

1) noting that all the research is Kevin Kelly's, I'm just taking his class 2) I agree that it seems underexplored and interesting.

meta: agreed. I'm putting all the posts up now for logistical reasons related to the class.

Comment by hazard on Books on the zeitgeist of science during Lord Kelvin's time. · 2019-12-10T21:49:06.788Z · score: 2 (1 votes) · LW · GW

Yikes, I fell for it. To your knowledge, is there any period in the history of physics where prominent scholars seemed to think that most of the work was done?

Comment by hazard on Books on the zeitgeist of science during Lord Kelvin's time. · 2019-12-10T15:04:51.175Z · score: 2 (3 votes) · LW · GW

My understanding is that at least around Kelvin's time, there was a general attitude of "we've almost figured out all the stuff". I'm very curious about what it looks like to have many scientists thinking that. My history is weak enough that I don't know how widespread that sentiment was, nor how long it was around. I only picked Kelvin as a marker of that.

Comment by hazard on The Actionable Version of "Keep Your Identity Small" · 2019-12-07T15:34:06.466Z · score: 7 (3 votes) · LW · GW

Lol, this is the post I wanted to write but better. Thanks Kaj! To anyone who ended up here, go read Ruby's post.

Comment by hazard on Raemon's Scratchpad · 2019-12-06T23:08:48.578Z · score: 5 (3 votes) · LW · GW

+1 excitement about bookshelves :)

Comment by hazard on The Actionable Version of "Keep Your Identity Small" · 2019-12-06T22:50:02.592Z · score: 6 (4 votes) · LW · GW

The clear API point is a very useful one. It feels like the difference between "I need people to think I'm XYZ way or they won't like me" and "I'll provide people this simple XYZ way to think about me so that they can interact with me at all".

To add to your suggestion, and to speak to the imagined person who feels wary of putting out an API-identity: there are all sorts of ways you can phrase your communication to express "This is a public-facing API; inquire inside if you're curious for more details."

Comment by hazard on The Actionable Version of "Keep Your Identity Small" · 2019-12-06T16:44:17.772Z · score: 10 (4 votes) · LW · GW

I'm claiming that identity behaviors (verbally identifying as a member, considering your group membership important, wearing group-style clothing or accessories, becoming less reasonable when your group is criticized, etc.) stem from a group having a monopoly on meeting your social needs, combined with insecurity and fear about the prospect of your needs no longer being met.

So yeah, I do think that you can get your social needs met by participating in groups without engaging in identity behavior (as you've suggested). I could be part of many different social circles and have lots of fulfilling relationships, and consider the stuff I do in each group important, and yet not engage much in identity behavior. I could also be a part of only one group.

I also agree that identity behavior can often be harmful. The main point I'm making is that (Fear about needs being met) -> (Identity behavior), and that if you only try to manage and tamp down Identity behavior, the pressures that created that behavior will still be present.

An example of (Fear about needs being met) -> (Identity behavior). You have only one friend group and it's a bunch of young graffiti artists. Someone argues with you that graffiti is harmful for the community. You fight vehemently to defend graffiti, because deep down you know that without graffiti, your group of friends wouldn't exist, and then you'd be alone.

If none of that jibes, can you expand on what you're thinking about identity?

Comment by hazard on The Actionable Version of "Keep Your Identity Small" · 2019-12-06T04:19:21.768Z · score: 7 (3 votes) · LW · GW

I see having a group identity as part of meeting one's needs, specifically one's social needs. So basically I still predict that the ease with which you can discard a particular group identity will be proportional to its monopoly on meeting your social needs.

And then my follow-up recommendation is something like, "Find another way to meet those needs before trying to throw away the identity, both for your sanity and to increase the odds of success" (though I can imagine changing my tone on that based on particular circumstances)

Is your stance something like, "Regardless of the monopoly it has on meeting your needs, you should discard the group identity as soon as you can identify it, because group identities are just that corrosive"?

Comment by hazard on Hazard's Shortform Feed · 2019-12-04T23:32:25.791Z · score: 7 (4 votes) · LW · GW

Act Short Now

  • Sleeping in
  • Flirting more

Think More Wrong

  • I no longer buy that there's a structural difference between math/the formal/a priori and science/the empirical/a posteriori.
  • Probability theory feels sorta lame.
Comment by hazard on Hazard's Shortform Feed · 2019-12-04T14:32:37.749Z · score: 7 (3 votes) · LW · GW

What am I currently doing to Act Long Now? (Dec 4th 2019)

  • Switching to Roam: Though it's still in development and there are a lot of technical hurdles to this being a long now move (they don't have good import export, it's all cloud hosted and I can't have my own backups), putting ideas into my roam network feels like long now organization for maximized creative/intellectual output over the years.
  • Trying to milk a lot of exploration out of the next year before I start work, hopefully giving myself springboards to more things at points in the future where I might not have had the energy to get started / make the initial push.
  • Being kind.
  • Arguing Politics* With my Best Friends

What am I currently doing to think Less Wrong?

  • Writing more has helped me hone my thinking.
  • Lots of progress on understanding emotional learning (or more practically, how to do emotional unlearning), allowing me to get to a more even-keeled center from which to think and act.
  • Getting better at ignoring the bottom line to genuinely consider what the world would be like under alternative hypotheses.
Comment by hazard on Hazard's Shortform Feed · 2019-12-04T14:20:36.281Z · score: 4 (2 votes) · LW · GW

Yesterday I read the first 5 articles on Google for "why arguments are useless". It seems pretty in the zeitgeist that "when people have their identity challenged you can't argue with them". A few of them stopped there and basically declared communication to be impossible if identity is involved; a few of them circuitously hinted at learning to listen and find common ground. A reason I want to get this post out is to add to the "here's why identity doesn't have to be a stop sign" camp.

Comment by hazard on Naryan Wong's Shortform · 2019-12-03T23:07:01.071Z · score: 2 (1 votes) · LW · GW

Sounds like an interesting crew. I'm also interested to hear how it goes!

Comment by hazard on Call for resources on the link between causation and ontology · 2019-12-03T02:28:32.468Z · score: 2 (1 votes) · LW · GW

On a book recommendation, the Book of Why (review here) gives a well explained intro to some modern (or maybe the cool kids have already moved on to something else) reasoning about differentiating causation and correlation.

Comment by hazard on Naryan Wong's Shortform · 2019-12-02T19:18:13.704Z · score: 2 (1 votes) · LW · GW

Those activities all sound fun and useful, and my gut also says that this will be foreign to a lot of people (i.e most of my friends / people I know at meetups) and it won't actually turn out that well (that's not at all me suggesting to avoid these ideas). Are the people at your meetup already used to these sorts of activities?

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T19:08:56.167Z · score: 2 (1 votes) · LW · GW

Aaaah, I see now. Just edited to what I think fits.

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T19:03:44.530Z · score: 2 (1 votes) · LW · GW

Thought that is related to this general pattern, but not this example. Think of having an idea of an end skill that you're excited by (doing bayes updates irl, successfully implementing TAPs, being swayed by "solid logical arguments"). Also imagine not having a theory of change. I personally have sometimes not noticed that there is or could be an actual theory of how to move from A to B (often because I thought I should already be able to do that), and so would use the black box negative reinforcement strategy on myself.

Being in that place involved being stuck for a while and feeling bad about being stuck. Progress was only made when I managed to go "Oh. There are steps to get from A to B. I can't expect to already know them. I must focus on understanding this progression, and not on just punishing myself whenever I fail."

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T01:53:11.540Z · score: 3 (2 votes) · LW · GW

I like that because I can verb it while speaking.

"How much cattle could you fit in this lobby? You can answer directly or mist."

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T01:42:20.487Z · score: 2 (1 votes) · LW · GW

(Meta: the order wasn't important, thanks for thinking about that though)

The selection part is something else I was thinking about. One of my thoughts was your "If there's no way to train PhDs, they die out." And the other was me being a bit skeptical of how big the pool would be right this second if we adopted a really thick skin policy. Reflecting on that second point, I realize I'm drawing from my day to day distribution, and don't have thoughts about how thick skinned most LW people are or aren't.

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T01:37:59.278Z · score: 2 (1 votes) · LW · GW

Yeah, I only talked about A after. Is the parenthetical rhetorical? If not I'm missing the thing you want to say.

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T00:37:53.625Z · score: 2 (1 votes) · LW · GW

Or, if you're okay with being a bit less of a canonical robust agent and don't want to take on the costs of reliability, you could try to always match your work to your state. I'm thinking more of "mood" than "state" here. Be infinitely creative chaos.

Oooh, I don't know any blog post to cite, but Duncan mentioned at a CFAR workshop the idea of being a King or a Prophet. Both can be reliable and robust agents. The King does so by putting out Royal Decrees about what they will do, and then executing said plans. The Prophet gives you prophecies about what they will do in the future, and they come true. While you can count on both the decrees of the king and the prophecies of the prophet, the actions of the prophet are more unruly and chaotic, and don't seem to make as much sense as the king's.

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T00:30:36.522Z · score: 2 (1 votes) · LW · GW

I still think this is genius.

Comment by hazard on Hazard's Shortform Feed · 2019-12-02T00:25:43.544Z · score: 3 (2 votes) · LW · GW

When I see this behavior, I worry that the rationalist is setting themselves up to have a blindspot when it comes to themselves being "overly sensitive" to feedback. I worry about this because it's happened to me. Not with reactions to feedback, but with other things. It's partially the failure mode of thinking that some state is beneath you, being upset and annoyed at others for being in that state, and this disdain making it hard to see when you engage in it.

K, I get that thinking a mistake is trivial doesn't automatically mean you're doomed to secretly make it forever. Still, I worry.