Comment by mr-hire on Clothing For Men · 2019-01-17T09:35:14.113Z · score: -3 (5 votes) · LW · GW

I love that you wrote this. I think LW needs more instrumental advice like this.

In this particular instance I think you're conflating dressing cool with dressing conservative.

These are not the same thing, although both are better than dressing badly.

Comment by mr-hire on Buy shares in a megaproject · 2019-01-16T18:35:47.879Z · score: 8 (5 votes) · LW · GW

It actually does come up frequently with companies that do lots of high-liability projects. Movie studios, for instance, will create a separate company for each production to limit their exposure if something goes wrong.

Comment by mr-hire on Buy shares in a megaproject · 2019-01-16T16:42:51.760Z · score: 1 (1 votes) · LW · GW

This is a cool idea. A fun test of this might be to create a few markets for existing mega-projects on an open prediction market like Augur, and see if you can drum up any interest for people actually investing in their outcomes.

A Framework for Internal Debugging

2019-01-16T16:04:16.478Z · score: 17 (8 votes)
Comment by mr-hire on Is there a.. more exact.. way of scoring a predictor's calibration? · 2019-01-16T12:05:01.726Z · score: 14 (4 votes) · LW · GW

There's a whole subfield on "scoring rules", which tries to measure people's calibration and resolution more exactly.

There are scoring rules that incorporate priors, scoring rules that incorporate information value to the question asker, and scoring rules that incorporate sensitivity to distance (if you're close to the answer, you get more points). There's a class of "strictly proper" scoring rules that incentivize people to report their true probability. I did a deep dive into scoring rules when writing the Verity whitepaper. Here are some of the more interesting/useful research articles on scoring rules (there's a small illustration of strict propriety after the list):

Order-Sensitivity and Equivariance of Scoring Functions - PDF - arxiv.org: https://www.evernote.com/l/AAhfW6RTrudA9oTFtd-vY7lRj0QlGTNp4bI/

Tailored Scoring Rules for Probabilities: https://www.evernote.com/l/AAhVczys0ddF3qbfGk_s4KLweJm0kUloG7k/

Scoring Rules, Generalized Entropy, and Utility Maximization: https://www.evernote.com/l/AAh2qdmMLUxA97YjWXhwQLnm0Ro72RuJvcc/

The Wisdom of Competitive Crowds: https://www.evernote.com/l/AAhPz9MMSOJMcK5wrr8mQGNQtSOvEeKbdzc/

A formula for incorporating weights into scoring rules: https://www.evernote.com/l/AAgWghOuiUtIe76PQsXwFSPKxGv-VkzH7l8/

Sensitivity to Distance and Baseline Distributions in Forecast Evaluation: https://www.evernote.com/l/AAg7aZg9BjRDLYQ2vpGow-qqN9Q5XY-hvqE/
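
To make "strictly proper" concrete, here's a minimal sketch in Python (the 0.7 "true belief" is just an arbitrary illustration, not taken from any of the papers above) showing that under the Brier score, a forecaster's expected penalty is minimized exactly by reporting their true probability:

```python
import numpy as np

def brier_score(report, outcome):
    """Brier score for a binary forecast: squared error between the
    reported probability and the 0/1 outcome. Lower is better."""
    return (report - outcome) ** 2

def expected_brier(report, true_p):
    """Expected Brier score when the event actually occurs with
    probability true_p and the forecaster reports `report`."""
    return true_p * brier_score(report, 1) + (1 - true_p) * brier_score(report, 0)

true_p = 0.7  # the forecaster's actual belief (arbitrary, for illustration)
reports = np.linspace(0.01, 0.99, 99)
best = reports[np.argmin([expected_brier(r, true_p) for r in reports])]
print(f"Expected Brier score is minimized at report = {best:.2f}")  # -> 0.70
```

For contrast, a linear score like |report − outcome| is not proper: its expectation is minimized by pushing your report all the way to 0 or 1, so it rewards overconfidence.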

Comment by mr-hire on The 3 Books Technique for Learning a New Skill · 2019-01-10T12:28:47.498Z · score: 1 (1 votes) · LW · GW

This may be because of my particular learning style. I tend to get most of my deep learning from the actual application of the skill, which is based on the How resource. I use the What resource in a very surface-level way, just getting particular facts or techniques when I'm stuck. However, I agree that What books tend to cover material in a deeper way.

The 3 Books Technique for Learning a New Skill

2019-01-09T12:45:19.294Z · score: 77 (28 votes)
Comment by mr-hire on I am Sailor Vulcan! · 2018-08-24T23:44:32.560Z · score: 10 (3 votes) · LW · GW

Something about the way you wrote this made me instantly like you.

Comment by mr-hire on Strategies of Personal Growth · 2018-07-28T19:20:21.102Z · score: 18 (6 votes) · LW · GW

As a counter to this: I got very, very far with this sort of self-improvement for a very long time (though I think LW was very bad at teaching it, and I mostly got it from other sources). I've recently focused on the alignment-based models as I was starting to hit diminishing returns with the old approach, but I did get a lot out of the previous paradigm.

I think the alignment-based models are very, very powerful, and I also think that the override-the-elephant models are quite powerful and get too much of a bad rap.

Comment by mr-hire on Meetup Cookbook · 2018-07-14T23:41:27.745Z · score: 3 (2 votes) · LW · GW

Thanks! This looks really useful.

Comment by mr-hire on Internalizing Internal Double Crux · 2018-06-07T20:26:01.380Z · score: 6 (2 votes) · LW · GW

It sounds like your naming process is actually Focusing. For me, the names don't matter as much, and I just have a conversation involving Focusing to figure out what the parts want.

Comment by mr-hire on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2018-05-29T04:01:26.912Z · score: 4 (1 votes) · LW · GW

Maybe, or maybe there's a different context entirely. As Said says, there really wasn't much context to this at all.

Comment by mr-hire on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2018-05-29T02:56:32.747Z · score: 10 (2 votes) · LW · GW

I was going to make this same comment. Without context, seems like a lot of fixing something that ain't broke.

Comment by mr-hire on Critch on career advice for junior AI-x-risk-concerned researchers · 2018-05-14T02:26:30.577Z · score: 13 (3 votes) · LW · GW

I have a model that there's something like a Pareto distribution in which 20% of the people in a field contribute 80% of the Actually Important advances, and about 80% of those advances come from a further 20% split: people who deliberately and strategically chose the field such that they could rationally expect to make advances. This implies that, for instance, in climate change, the ~4% of people who have actually done a Fermi estimate of their impact on climate change will contribute ~64% of the relevant advances in the field.
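
Spelled out (a quick sketch, assuming the two 80/20 splits compound independently):

$$0.2 \times 0.2 = 0.04 \ \text{(share of people)}, \qquad 0.8 \times 0.8 = 0.64 \ \text{(share of advances)}$$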

One thing you can say is that this is awful, and you really would like to have a field without this ridiculous distribution, so you try to tell people to wait to go into the field until they can contribute to Actually Important things. But it seems like there are a lot of countervailing forces preventing this, including the status incentive of saying "this is a field only for people who work on Actually Important things." If your timelines are really short, you might not be worried about this, but it does seem like something to worry about over a decade or so of putting this message out in a specific field.

The other way to handle it would be to expect the Pareto distribution to happen because most people just aren't strategic about their careers and do rationalize. The goal in that case is to try to grow the field as much as possible, knowing that some small percentage of the people who go into the field will be strategic thinkers who will contribute quite a bit. Not only does this strategy seem to capture the pattern of fields that have actually grown and made significant advances in solving problems, but it also has the benefit of capturing the additional ~36% of Actually Important advances that come from people who aren't strategically trying to create impact.

Comment by mr-hire on Introducing the Longevity Research Institute · 2018-05-08T21:27:45.359Z · score: 13 (3 votes) · LW · GW

The announcement post for RAISE was specifically removed from the front page, with Oliver stating the reason was that there was an explicit LW policy to not allow organization announcements on the front page. Can we perhaps get some clarity on this policy?

Comment by mr-hire on [deleted post] 2018-05-06T22:13:40.553Z

I looked over your posts and I like them. If the question in your title were personally directed at me, my answer would be no.

Comment by mr-hire on Hypotheticals: The Direct Application Fallacy · 2018-05-06T21:59:17.766Z · score: 14 (4 votes) · LW · GW

See also: Please Don't Fight The Hypothetical, plus the excellent comment from David Gerard that explains why people exhibit this behavior, and why explaining to them really nicely that this will help them learn might be seen as disingenuous.

Comment by mr-hire on Advocating for factual advocacy · 2018-05-06T21:48:46.807Z · score: 9 (2 votes) · LW · GW

I think one thing this post fails to take into account is the difference between endorsed, professed, conscious beliefs and unconscious aliefs. I suspect the "morals as a convenience" theory is actually talking about the latter type of belief, while the "factual advocacy" approach is more focused on the former.

While it is true that factual advocacy can affect unconscious aliefs, there are much more effective ways to do so, many pioneered and tested in the field of marketing, which in many ways can be seen as the study of how to affect people's aliefs such that they change their actions.

Comment by mr-hire on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-06T21:37:23.069Z · score: 4 (1 votes) · LW · GW

I think the central question Duncan is getting at in the article is where the line should be. Society is putting it more towards micro; Duncan thinks it's swung too far and wants it more towards macro. But it's clear that just saying "have a line" doesn't help with the dilemma very much (unless people don't have personal boundaries at all, in which case saying "have a line" is definitely helpful advice).

Comment by mr-hire on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-04T22:03:44.897Z · score: 9 (2 votes) · LW · GW

Reify it?

Comment by mr-hire on Ten Commandments for Aspiring Superforecasters · 2018-04-28T19:18:52.424Z · score: 4 (1 votes) · LW · GW

I'd be interested in the parts that you felt most improved your calibration. Personally, most of what I got from the book was about how effective forecasting tournaments were, what their limits were, and how to run them effectively. I got very little in terms of better calibration.

Comment by mr-hire on The Leading and Trailing Edges of Development · 2018-04-27T17:07:02.716Z · score: 4 (1 votes) · LW · GW

Yes exactly. I had this very bottleneck around relationships and communication at one point in my development.

Comment by mr-hire on The Leading and Trailing Edges of Development · 2018-04-27T07:02:47.647Z · score: 9 (2 votes) · LW · GW

I quite like this idea - it's easy to focus on just the trailing or leading edge, and alternate between thinking I'm kicking ass and failing miserably. Just realizing that my growth lies somewhere within this large range feels like it could be really liberating.

Comment by mr-hire on The Leading and Trailing Edges of Development · 2018-04-27T07:00:23.428Z · score: 4 (1 votes) · LW · GW

There's some vague intuitiony part of me that says the trailing edge is quite likely to be a bottleneck, and the leading edge is quite likely to be a force multiplier.

Comment by mr-hire on Models of human relationships - tools to understand people · 2018-04-25T19:37:41.356Z · score: 4 (1 votes) · LW · GW

This is the best post on interpersonal dynamics I've seen in quite a while. Thank you :).

Comment by mr-hire on Ten Commandments for Aspiring Superforecasters · 2018-04-25T19:29:12.070Z · score: 16 (5 votes) · LW · GW

This appendix struck me as exceedingly useless when I first encountered it. Most of the suggestions follow the pattern of "Find the right balance between two extremes", but don't give enough context to figure out where that balance is.

It's like he talked to the experts, got the fact that a lot of what they were doing was tacit knowledge that gave them a feel for this sort of thing, but didn't do any of the modeling work to then pull out the models that actually underlie the tacit knowledge.

I'd be curious whether anyone has improved their calibration using these guidelines. Personally, I got much more mileage out of How to Measure Anything's 5 calibration strategies.

Symbiosis - An Intentional Community For Radical Self-Improvement

2018-04-22T23:15:06.832Z · score: 29 (7 votes)
Comment by mr-hire on How Going Meta Can Level Up Your Career · 2018-04-18T05:34:24.407Z · score: 3 (2 votes) · LW · GW

Seems like a likely failure mode if you skip the "master your current level" step in abstraction.

Comment by mr-hire on How Going Meta Can Level Up Your Career · 2018-04-17T08:59:41.738Z · score: 4 (1 votes) · LW · GW

Correct :)

How Going Meta Can Level Up Your Career

2018-04-14T02:13:02.380Z · score: 40 (19 votes)
Comment by mr-hire on Arbital postmortem · 2018-02-05T22:48:41.365Z · score: 4 (1 votes) · LW · GW

And calling gwern a non-mathematician almost feels incorrect... even though it's correct :).

Comment by mr-hire on Sources of intuitions and data on AGI · 2018-02-01T22:08:35.343Z · score: 27 (7 votes) · LW · GW

Meta:

Is there still no name for "Paul Christiano Style Amplification"? Can't we come up with a name like "Anthropomorphic Simulators" or something, so that this can become a legit thing people talk about instead of always seeming like the hobby horse of one dude?

Comment by mr-hire on Demon Threads · 2018-01-13T23:12:51.824Z · score: 10 (3 votes) · LW · GW

Meta: There's a really cool point in here about HOW TO EXORCISE DEMONS FROM THREADS, without ending the thread. But people may miss it because the bolded text and first few sentences mostly seem to be about technical ideas on commenting. Recommend reading this comment if you skimmed it previously.

Non-Meta: I too have noticed a certain tone that's factual, friendly, and non-combative that can seem to take the wind out of demon threads because it somehow disables everyone's defensiveness centers. I think this is probably the best solution to demon threads, and also reflectively useful in that if people get great at this tone, demon threads are less likely to happen in the first place.

Comment by mr-hire on Demon Threads · 2018-01-12T23:20:58.747Z · score: 3 (1 votes) · LW · GW

I think I have a TAP that's something like, "Notice I'm in a Demon Thread, bow out of conversation." The way I bow out is something like "It doesn't feel to me like there's anything useful coming out of this discussion, so I won't be replying further".

This seems potentially less useful than the norm of "take it to private". But it does seem to reliably end the demon threads. Not sure if it also has a chilling effect.

Comment by mr-hire on The O'Brien Technique · 2018-01-09T23:15:36.358Z · score: 3 (1 votes) · LW · GW

Ahh I see, so the important thing I was missing is something like "this is about disentangling social reality from predictive reality"?

Comment by mr-hire on The O'Brien Technique · 2018-01-09T10:33:10.222Z · score: 3 (1 votes) · LW · GW

I wish there was an example here. I think the algorithm you're pointing to is something like:

  1. Find areas where your endorsed beliefs and aliefs diverge.
  2. Mentally contrast both in your head, and feel the structural tension and dissonance this creates. (not sure if you're bouncing between both here, or overlaying them on top of one another and then viewing simultaneously).
  3. See what both of them would have predicted in the past, and notice which one is more true. Grok this so that whichever one is wrong updates.
  4. Follow the beliefs along their belief chains/regulator chains, find further beliefs, and repeat steps 1-3.

Is that roughly what you're trying to describe? Am I emphasizing the proper parts?

I'll note that one thing I love about step #3 is that it's asymmetric with respect to true beliefs. Other belief-change techniques I know, like the Lefkoe belief process or reframing, instead ask you to imagine how your beliefs could be wrong, which is very effective for getting rid of them but says nothing about their validity.

Video: The Phenomenology of Intentions

2018-01-09T03:40:45.427Z · score: 32 (8 votes)
Comment by mr-hire on [deleted post] 2018-01-06T02:05:57.613Z

Maybe black is the color of hormetic systems, green the color of selective systems, and white the color of robust systems. I suppose that would make systems based on effectuative principles red. Can't think of how a system would be based on blue, but I'm sure those exist as well.

Comment by mr-hire on Superhuman Meta Process · 2018-01-04T19:46:56.126Z · score: 3 (1 votes) · LW · GW

I really like the general idea of this. I would have loved to see an example with, for instance, making decisions over a second, day, week, month, and year, to get a more concrete idea of how this actually cashes out in terms of decision making, planning, and motivational processes.

Video - Subject - Object Shifts and How to Have Them

2018-01-04T02:11:22.142Z · score: 11 (4 votes)
Comment by mr-hire on How I accidentally discovered the pill to enlightenment but I wouldn’t recommend it. · 2018-01-03T03:39:44.622Z · score: 8 (2 votes) · LW · GW

I'm not Elo, but, as a content creator, there's really no replacement for owning my own content publishing platform (rather than being tied to someone else's). I can edit post formatting as I see fit, change old posts, create arbitrarily complicated/interactive link structures, and add signup forms, things I'm selling, or other forms of reputation and sales monetization. I also never have to worry about a platform suddenly locking me out or deleting old posts, or otherwise acting in its interest and not mine.

Comment by mr-hire on Writing Down Conversations · 2017-12-30T19:24:07.207Z · score: 3 (1 votes) · LW · GW

There's "Rev", which transcribes at 1 dollar a minute. I've used this in the past to write blog posts from conversations, but it doesn't seem worth it for something like an hour long meetup.

Comment by mr-hire on Why did everything take so long? · 2017-12-30T02:00:37.521Z · score: 4 (2 votes) · LW · GW

One of the biggest leaps I made in trying to understand innovation was:

A. Realizing that new technologies and ideas evolve from misunderstood to understood, and only in the process become truly world-changing.

B. Realizing that many (most?) new technologies and ideas rely on old ideas and technologies being understood to the extent that they are commodities or utilities.

Note that A is actually dependent on the speed of communication, so our current intuitions about how fast A happens are orders of magnitude off base.

Two of my favorite mental models to help make sense of how this works:

The Carlota Perez Framework

Wardley Mapping

Comment by mr-hire on Hero Licensing · 2017-11-28T15:11:05.325Z · score: 6 (2 votes) · LW · GW

Peter Thiel says that status blindness is a common trait in successful startup founders. We should trust him because Peter Thiel is high status.

Comment by mr-hire on Hero Licensing · 2017-11-21T20:11:05.883Z · score: 10 (4 votes) · LW · GW

No, what I got was "modest epistemology doesn't make any sense in these precise situations when civilizational inadequacy applies". That's an incredibly hedgehoggy way to look at modest epistemology.

A more foxxy way would be something like "apply the frames of both modest and immodest epistemologies, as well as the frame of civilizational inadequacy, and then, using data from all of these frames, make your decision."

Comment by mr-hire on Different Worlds · 2017-11-17T01:40:41.067Z · score: 3 (1 votes) · LW · GW

Are you claiming that people were picking up on something other than the signals your body was giving off (gait and posture would be the most obvious studied signals here), or simply that people don't often think about things like gait and posture when they talk about body language?

Comment by mr-hire on Hero Licensing · 2017-11-16T22:06:32.196Z · score: 24 (11 votes) · LW · GW

You spend a lot of time arguing "immodest" over "modest" epistemology here, when the thing that really gets me is hedgehog vs. fox epistemology. I kept wanting to say: yes, Pat's right that the outside view should lower the probability in certain situations; yes, Eliezer's right that the inside view should raise his probability of beating the odds in certain situations; and they should look at this individual situation to see how much each applies.

I wanted to say that YES, Eliezer's right that outside-view arguments aren't useful to think about when actually working on a project, so you should compartmentalize and not think about them then, but that doesn't mean they're not useful to think about at other times.

To me you're strawmanning the modest epistemology by making it "hedgehog modest" instead of "fox modest". This was a persistent problem I had with nearly every chapter in the book, and not just this bonus chapter.

Comment by mr-hire on Open thread, November 13 - November 20, 2017 · 2017-11-14T02:14:48.943Z · score: 3 (1 votes) · LW · GW

Re your second paragraph, this may be a selection effect for the smart and competent people you know? Many smart and competent people I know think that trying to describe and understand subjective experience is the good stuff.

Comment by mr-hire on [deleted post] 2017-10-19T06:05:29.180Z

The way I learned it at CFAR was a four-step process: notice the felt sense, describe the felt sense, check whether the description is correct, and go back to describing if it isn't. Almost exactly what you stated above.

Comment by mr-hire on Different Worlds · 2017-10-05T20:04:58.060Z · score: 9 (4 votes) · LW · GW

No one here seems to be mentioning that subcommunications are a real, studied thing encompassing body language, facial expression, voice tonality, and eye contact. The Facial Action Coding System is one example of how scientists have begun to codify these types of auras. Less studied, but in my experience still very valid, is the fact that one can change one's subcommunications either through deliberate practice or through changing internal beliefs and feelings.

Comment by mr-hire on Writing That Provokes Comments · 2017-10-05T19:59:30.373Z · score: 15 (6 votes) · LW · GW

This is similar to the idea of an MVP in the startup world. It makes me think of a sentiment from Ryan Holiday's book on writing perennial sellers: ideas should become comments, comments should become conversations, conversations should become blog posts, blog posts should become books. Test your ideas at every stage to make sure you're writing something that will have an impact.

Comment by mr-hire on Slack · 2017-10-03T20:54:21.573Z · score: 5 (4 votes) · LW · GW

The opposite of slack would be... deliberate constraints? Which I find very valuable. In addition to the value of deliberate constraints: Parkinson's law is a real thing, as are search costs, analysis paralysis, and eustress (distress's motivating cousin). I find that when I'm structured and extremely busy, I'm productive and happy, but when I have slack, I'm not.

Could this be a case of Reversing the advice you get?