Posts

Reconsolidation Through Questioning 2019-11-14T23:22:43.518Z · score: 11 (5 votes)
Reconsolidation Through Experience 2019-11-13T20:04:39.345Z · score: 15 (5 votes)
The Hierarchy of Memory Reconsolidation Techniques 2019-11-13T20:02:43.449Z · score: 12 (5 votes)
Practical Guidelines for Memory Reconsolidation 2019-11-13T19:54:10.097Z · score: 31 (6 votes)
A Practical Theory of Memory Reconsolidation 2019-11-13T19:52:20.364Z · score: 16 (6 votes)
Expected Value- Millionaires Math 2019-10-09T14:50:26.732Z · score: 8 (2 votes)
On Collusion - Vitalik Buterin 2019-10-09T14:45:20.924Z · score: 25 (11 votes)
Exercises for Overcoming Akrasia and Procrastination 2019-09-16T11:53:10.362Z · score: 21 (8 votes)
Appeal to Consequence, Value Tensions, And Robust Organizations 2019-07-19T22:09:43.583Z · score: 49 (15 votes)
Overcoming Akrasia/Procrastination - Volunteers Wanted 2019-07-15T18:29:40.888Z · score: 16 (4 votes)
What are good resources for learning functional programming? 2019-07-04T01:22:05.876Z · score: 24 (9 votes)
Matt Goldenberg's Short Form Feed 2019-06-21T18:13:54.275Z · score: 32 (6 votes)
What makes a scientific fact 'ripe for discovery'? 2019-05-17T09:01:32.578Z · score: 9 (3 votes)
The Case for The EA Hotel 2019-03-31T12:31:30.969Z · score: 66 (23 votes)
How to Understand and Mitigate Risk 2019-03-12T10:14:19.873Z · score: 50 (15 votes)
What Vibing Feels Like 2019-03-11T20:10:30.017Z · score: 15 (25 votes)
S-Curves for Trend Forecasting 2019-01-23T18:17:56.436Z · score: 100 (37 votes)
A Framework for Internal Debugging 2019-01-16T16:04:16.478Z · score: 41 (18 votes)
The 3 Books Technique for Learning a New Skill 2019-01-09T12:45:19.294Z · score: 141 (77 votes)
Symbiosis - An Intentional Community For Radical Self-Improvement 2018-04-22T23:15:06.832Z · score: 29 (7 votes)
How Going Meta Can Level Up Your Career 2018-04-14T02:13:02.380Z · score: 43 (22 votes)
Video: The Phenomenology of Intentions 2018-01-09T03:40:45.427Z · score: 37 (11 votes)
Video - Subject - Object Shifts and How to Have Them 2018-01-04T02:11:22.142Z · score: 14 (4 votes)

Comments

Comment by mr-hire on Toon Alfrink's sketchpad · 2020-01-17T00:59:06.862Z · score: 2 (1 votes) · LW · GW
Any combination of goals/drives could have a (possibly non-linear) mapping which turns them into a single unified goal in that sense, or vice versa.

Yeah, I think that if the brain in fact is mapped that way it would be meaningful to say you have a single goal.

Let me put it more simply: can achieving "self-determination" alleviate your need to eat, sleep, and relieve yourself? If not, then there are some basic biological needs (maintenance of which is a goal) that have to be met separately.

Maybe; it depends on how the brain is mapped. I know of at least a few psychology theories which would say things like avoiding pain and getting food are in the service of higher psychological needs. If you came to believe, for instance, that eating wouldn't actually lead to those higher goals, you would stop.

I think this is pretty unlikely. But again, I'm not sure.

Comment by mr-hire on Toon Alfrink's sketchpad · 2020-01-16T20:15:04.596Z · score: 2 (1 votes) · LW · GW
Prioritization of goals is not the same as goal unification.

It can be if the basic structure is "I need to get my basic needs taken care of so that I can work on my ultimate goal".

I think Kaj has a good link on experimental proof for Maslow's Hierarchy.

I also think that it wouldn't be a stretch to call Self-determination theory a "single goal" framework, that goal being "self-determination", which is a single goal made up of 3 separate subgoals which, crucially, must be obtained together to create meaning. (If they could be obtained separately to create meaning, and people were OK with that, then I don't think it would be fair to categorize it as a single goal theory.)

Comment by mr-hire on Toon Alfrink's sketchpad · 2020-01-16T17:49:15.457Z · score: 2 (1 votes) · LW · GW

It's interesting because Maslow's Hierarchy actually seems to point to the exact opposite idea to me. It seems to point to the idea that everything we do, even eating food, is in service of eventual self-actualization.

This is of course ignoring the fact that Maslow seems to basically be false experimentally.

Comment by mr-hire on tragedyofthecomments's Shortform · 2020-01-16T08:37:04.600Z · score: 9 (2 votes) · LW · GW

I actually think it's quite useful to make a statement like "Man, it would be great if the community would..."

I think it's a strawman to translate this to "I want the all-powerful entity that runs the community to..."

And I think it stems from an attitude that "You shouldn't complain about problems if you don't have real solutions."

Which seems wrong to me. People pointing out problems even when they don't have solutions is useful. People pointing out better equilibria even if they don't have plans to get there is also useful.

A lot of the time this complaint seems to be hiding a deeper complaint, which is "You pointing out problems without solutions makes me stressed and frustrated." That's OK to state, but I also get this sense of "OK, but that's not really the problem of the person who pointed it out; learn to handle your own emotional reactions."

Comment by mr-hire on tragedyofthecomments's Shortform · 2020-01-16T04:48:41.151Z · score: 2 (1 votes) · LW · GW

I often see people making statements that sound to me like . . . "The entity in charge of bay area rationality should enforce these norms." or "The entity in charge of bay area rationality is bad for allowing x to happen."

Can you give an example of someone who said this? I've never heard this, only "the bay area rationality community should", which is much more reasonable, if no easier to enforce.

Comment by mr-hire on Toon Alfrink's sketchpad · 2020-01-14T04:51:00.933Z · score: 2 (1 votes) · LW · GW

(Citation needed)

I think both of those assumptions are unlikely, but am skeptical of your certainty.

Comment by mr-hire on Are "superforecasters" a real phenomenon? · 2020-01-11T23:42:10.988Z · score: 4 (2 votes) · LW · GW

In that case I'd consider this something like superforecasting as a continuum rather than a category; the 2% cutoff seems quite arbitrary, as does calling them superforecasters.

Comment by mr-hire on Are "superforecasters" a real phenomenon? · 2020-01-09T14:06:04.383Z · score: 4 (2 votes) · LW · GW

If there isn't a discontinuity, then how is there a clear group that outperformed?

Comment by mr-hire on ozziegooen's Shortform · 2020-01-08T20:25:53.813Z · score: 5 (3 votes) · LW · GW
How? By making a Fermi model or similar.

I'm fairly skeptical of this. From a conceptual perspective, we expect the tails to be dominated by unknown unknowns and black swans. Fermi estimates and other modelling tools are much better at estimating scenarios that we expect, whereas if we find ourselves in the extreme tails, it's often because of events or factors that we failed to model.

Comment by mr-hire on ozziegooen's Shortform · 2020-01-08T18:14:46.272Z · score: 2 (1 votes) · LW · GW
Users of that forecasting system may care about this tail. They may be willing to pay for improvements in the aggregate distributional forecast such that it better models an enlightened ideal. If it were quickly realized that 99.99% of the distribution was uniform, then any subsidies for information should go to those that did a good job improving the 0.001% tail. It’s possible that some pretty big changes to this tail could be figured out.

I'm really interested in this type of scheme because it would also solve a big problem in futarchy and futarchy-like setups that use prediction polling, namely the inability to score conditional counterfactuals (which is most of the forecasting you'll be doing in a futarchy-like setup).

One thing you could do instead of scoring people against expert assessments is to score them against the final aggregated and extremized distribution.
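To make "aggregated and extremized" concrete in the binary case, here's a minimal sketch (assuming the common power-law extremization form; the exponent and function name are illustrative, not a spec):

```python
def extremized_aggregate(probs, d=2.5):
    """Average individual binary forecasts, then push the mean away from 0.5.

    Uses power-law extremization p**d / (p**d + (1 - p)**d), a common
    correction for aggregate forecasts being less confident than the
    group's pooled information warrants. d is an illustrative exponent.
    """
    p = sum(probs) / len(probs)
    return p ** d / (p ** d + (1 - p) ** d)

# e.g. extremized_aggregate([0.7, 0.8, 0.75]) is roughly 0.94,
# noticeably more confident than the raw mean of 0.75.
```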

One issue with any framework like this is that general calibration may be very different from calibration at the tails. Whatever scoring rule you use to determine the calibration of experts, or of the aggregate, runs into the same issue: long-tail events rarely happen.

Another solution to this problem (although it doesn't solve the counterfactual conditional problem) is to create tailored scoring rules that provide extra rewards for events at the tails. If an event at the tails is a million times less likely to happen, but you care about it equally to events at the center, then provide a million times the reward for accuracy near the tail in the event it happens. Prior work on tailored scoring rules for different utility functions here: https://www.evernote.com/l/AAhVczys0ddF3qbfGk_s4KLweJm0kUloG7k/
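As a minimal sketch of that idea (illustrative names, assuming a log scoring rule as the base; note that naively weighting a proper scoring rule by outcome can distort incentives, so treat this as an illustration rather than a finished mechanism):

```python
import math

def tail_weighted_log_score(p_assigned, outcome_was_tail, tail_prior=1e-6):
    """Log score for the realized outcome, with the stakes scaled up
    when the rare tail event is the one that actually happened.

    p_assigned: probability the forecaster assigned to the realized outcome.
    outcome_was_tail: True if the realized outcome was the tail event.
    tail_prior: assumed prior probability of the tail event (illustrative).
    """
    base = math.log(p_assigned)  # standard log score, always <= 0
    # "A million times less likely, a million times the reward": scale the
    # score by the inverse of the tail's prior, so accuracy on the tail
    # matters about as much in expectation as accuracy on the bulk.
    weight = 1.0 / tail_prior if outcome_was_tail else 1.0
    return weight * base
```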


Comment by mr-hire on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-07T01:17:47.149Z · score: 8 (4 votes) · LW · GW

I don't know where else to say a thing I haven't said, so I'll say it here. I really appreciate your passion for truth and outing deception, Zack.

Comment by mr-hire on Raemon's Scratchpad · 2020-01-06T17:19:24.902Z · score: 2 (1 votes) · LW · GW

Mostly a concerted effort on my part to find people who were good at these things, talk to them, and inhabit their positions with empathy. A lot of it was finding my own aesthetic analogies for what they were doing, then checking in with them to see ways the analogy didn't work, and tweaking as needed.

Comment by mr-hire on Raemon's Scratchpad · 2020-01-05T23:28:06.161Z · score: 6 (3 votes) · LW · GW

It used to be really hard for me to see things as ugly, but I was able to get that skill.

Prior to that, it used to be really hard for me to judge people, but I was also able to learn that skill.

Comment by mr-hire on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-05T05:13:57.139Z · score: 4 (2 votes) · LW · GW

Yes. And Schelling points refer to the latter (i.e., coordination done in the long-ago past that creates common knowledge), and not the former (coordination around this specific decision point).

Comment by mr-hire on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T16:46:59.033Z · score: 2 (1 votes) · LW · GW

This comment of mine (also available on my user page) seems to have disappeared completely in the move. I'm unsure if other comments and subthreads disappeared as well.

https://www.lesswrong.com/posts/pC74aJyCRgns6atzu/meta-discussion-from-circling-as-cousin-to-rationality

Comment by mr-hire on What are the best self-help book summaries you've read? · 2020-01-03T19:15:16.397Z · score: 2 (3 votes) · LW · GW

Books in or close to the self-help domain seem reliably to be horribly padded, excessively anecdote-laden, and generally somewhat mawkish.

Just want to say I strongly disagree with this. Narrative and emotional arc are in general useful for most books, but especially self-help books, which are trying to make immediate changes to your actions.

I think there's a skill to reading with your system 1 such that you can update your aliefs from anecdotes and sentimentality, and would recommend learning that skill rather than skipping those vital parts of the books.

(It may be that there's an even greater skill of just reading a bare-bones summary and then updating your aliefs, but I haven't seen it.)

Comment by mr-hire on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-03T03:59:34.680Z · score: 7 (3 votes) · LW · GW

If you think of a Schelling point as the point we'd coordinate on with no further exchange of information,

The problem I have with this definition is that it makes Schelling point a fairly useless term. I think of Schelling points as the things that result without specific coordination, but only common background knowledge.

But arriving at that point didn't involve zero coordination. There's a bunch of information we all have to know, and there are a bunch of specific reasons why that would be the place to meet. We all had to know that Grand Central exists. That it's prominent. That it's a convenient point for getting to lots of other parts of New York City. And people certainly had to coordinate to build Grand Central in the first place.

Exactly. It's this type of background knowledge that's separate from the game/decision being made.

Comment by mr-hire on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-03T03:54:18.868Z · score: 2 (1 votes) · LW · GW

I think Schelling's point about Schelling points was about cultural background in the absence of coordination.

Comment by mr-hire on 2020's Prediction Thread · 2020-01-03T02:02:10.366Z · score: 2 (1 votes) · LW · GW

Note also that it's impossible to determine "a majority of predictions to be overconfident" as a literal statement. A prediction is only right or wrong; overconfidence can only be looked at in terms of the aggregate (which is what I meant in the original post).
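A minimal sketch of what checking overconfidence "in terms of the aggregate" could look like (illustrative names; predictions are (stated probability, outcome) pairs):

```python
from collections import defaultdict

def calibration_table(predictions, bucket_width=0.1):
    """Group predictions by stated probability, then compare each bucket's
    stated probability to the realized frequency of its outcomes.

    predictions: iterable of (stated_probability, outcome) pairs, where
    outcome is 1 if the prediction came true and 0 otherwise.
    """
    buckets = defaultdict(list)
    for p, outcome in predictions:
        buckets[round(p / bucket_width) * bucket_width].append(outcome)
    # Overconfidence shows up as realized frequencies systematically
    # closer to 50% than the probabilities people stated.
    return {p: sum(os) / len(os) for p, os in sorted(buckets.items())}
```
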

Comment by mr-hire on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-03T01:59:02.164Z · score: 5 (5 votes) · LW · GW

I don't like that schelling point is used to mean "coordination point" here when it's supposed to mean "common point without coordination"

Comment by mr-hire on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-03T00:43:31.516Z · score: 12 (6 votes) · LW · GW
Maybe a term for the attitude / rhetorical move that I find frustrating would be: "weaponized bafflement". Said often expresses that he has no idea what someone could mean by something, or is totally shocked that someone could think two things are similar (e.g. grouping both reading the sequences and attending CFAR as rationality training), when to me it seems pretty easy to at least generate some hypotheses about what they might mean or why they might think something.

To me this particular move is part of a broader pattern, used by Said and a few other frequent posters here, of using the Socratic method to make their point, which is frequently time-consuming, annoying to answer, and IMO a bad tool for finding the truth.

Whenever I detect someone using the Socratic method in the comment section of my posts, I ask them to make their point more directly, and I may in fact add this to my author commenting guidelines.

Comment by mr-hire on Don't Double-Crux With Suicide Rock · 2020-01-02T16:20:04.660Z · score: 4 (2 votes) · LW · GW

I think being smart is only very small evidence for being rational (especially globally rational, as Zack is assuming here, rather than locally rational).

I think most of the evidence for rationality that comes from understanding philosophical arguments is screened off by being smart (which, again, is already only a very weak correlate of rationality).

Comment by mr-hire on 2020's Prediction Thread · 2020-01-02T05:34:18.111Z · score: 4 (2 votes) · LW · GW

I agree, what matters is calibration and resolution.

If you're talking about an individual's predictions, that is. I'm unconvinced that group calibration would be a useful epistemic yardstick in this instance.

Comment by mr-hire on Benito's Shortform Feed · 2020-01-01T23:03:31.859Z · score: 6 (3 votes) · LW · GW

There's a similar free game for Android and iOS called Spaceteam that I highly recommend.

Comment by mr-hire on 2020's Prediction Thread · 2020-01-01T22:54:41.844Z · score: 7 (4 votes) · LW · GW

I predict that, as in 2010, a majority of these predictions will be overconfident.

Comment by mr-hire on ozziegooen's Shortform · 2019-12-29T16:13:49.611Z · score: 6 (3 votes) · LW · GW

A thing I want:

A recommendation engine that works based on listing the tropes you enjoy.

Comment by mr-hire on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2019-12-29T00:38:35.364Z · score: 3 (4 votes) · LW · GW

Importance * Neglectedness - Reputation hit might be more accurate.

Comment by mr-hire on ozziegooen's Shortform · 2019-12-28T23:13:34.538Z · score: 6 (3 votes) · LW · GW

In a recent thread about changing the name of Solstice to Solstice Advent, Oliver Habryka estimated it would cost at least $100,000 to make that happen. This seems like a reasonable estimate to me, and a good lower bound for how much value you'd need to get from a name change for it to be worth it.

The idea of lowering this cost is quite appealing, but I'm not sure how to make a significant difference there.

I think it's also worth thinking about the counterfactual cost of discouraging naming things.

As an example, here's a post with an important concept that hasn't really spread because it doesn't have a snappy name: https://www.lesswrong.com/posts/K4eDzqS2rbcBDsCLZ/unrolling-social-metacognition-three-levels-of-meta-are-not

Comment by mr-hire on ozziegooen's Shortform · 2019-12-28T17:19:52.862Z · score: 2 (1 votes) · LW · GW
One of my issues with LessWrong is the naming system. There's by now quite a bit of terminology to understand; the LessWrong wiki seems useful here. But there's no strong process from what I understand. People suggest names in their posts, these either become popular or don't. There's rarely any refactoring.

One of the issues with this, in both an academic and a LW context, is that changing the name of something in a single source-of-truth codebase is much cheaper than changing the name of something in a community. The more popular an idea, the more it costs to change its name. Similarly, when you're working within a single organization, creating a process that everyone follows is relatively cheap compared to a loosely tied together community with various blogs, individuals, and organizations coining their own terms.

Comment by mr-hire on We run the Center for Applied Rationality, AMA · 2019-12-27T14:25:07.433Z · score: 2 (1 votes) · LW · GW

I think he's bad at this.

You can see this in some aspects of his companies.

High micromanagement. High turnover. Disgruntled former employees.

Comment by mr-hire on We run the Center for Applied Rationality, AMA · 2019-12-26T23:42:58.041Z · score: 2 (1 votes) · LW · GW

Leadership (as for instance leadership retreats are trying to teach it) is the intersection between management and strategy.

Another way to put it: it's the discipline of getting people to do what's best for your organization.

Comment by mr-hire on Dony's Shortform Feed · 2019-12-25T19:04:57.462Z · score: 2 (1 votes) · LW · GW

For the next ten years:

Something about noticing trauma and healing with love.

Comment by mr-hire on Dony's Shortform Feed · 2019-12-25T18:16:56.218Z · score: 5 (3 votes) · LW · GW

When I notice something that's in the way of achieving my goals, look up ways other people have solved it.

Oftentimes using the 3 books technique: https://www.lesswrong.com/posts/oPEWyxJjRo4oKHzMu/the-3-books-technique-for-learning-a-new-skilll

Comment by mr-hire on We run the Center for Applied Rationality, AMA · 2019-12-25T00:56:44.436Z · score: 3 (2 votes) · LW · GW
Well, there are a lot of things out there. Why did you promote these ones?

I don't think these ones in particular; I listed these as some of the most popular ones.

Granted, this is not a systematic investigation of the space of personal development stuff, but that seems less promising to me than people thinking about particular problems (often personal problems, or problems that they've observed in the rationality and EA communities) and investigating know solutions or attempted solutions that relate to those problems.

I personally have gotten a lot out of a hybrid approach, where I find a problem, investigate the best relevant self-helpy solutions, then go down the rabbit hole of finding all the other things created by that person, and all of their sources, influences, and collaborators.

I suspect someone whose job it is to do this could have a similar function to the "living library" role at MIRI (I'm not sure how exactly that worked for them, though).

Comment by mr-hire on We run the Center for Applied Rationality, AMA · 2019-12-25T00:14:00.238Z · score: 2 (1 votes) · LW · GW

I claim that Elon has done this despite his leadership abilities.

I think that it's possible to be a bad leader but an effective CEO.

Comment by mr-hire on We run the Center for Applied Rationality, AMA · 2019-12-23T14:43:08.751Z · score: 2 (1 votes) · LW · GW

Yes. I think the semantic content of "My intuition is that Design A is better than Design B" refers to how the intuition "cashes out" in terms of decisions. This contrasts with the felt sense, which always seems to refer to what the intuition is like "from the inside," for example a sense of unease when looking at Design A, and rightness when looking at Design B.

I feel like the word "intuition" can refer to either of these, whereas when I say "felt sense" it always refers to the latter.

Comment by mr-hire on We run the Center for Applied Rationality, AMA · 2019-12-23T14:25:58.091Z · score: 2 (1 votes) · LW · GW

Yes, although I expect the utility of circling over other methods to be dependent on the degree to which the ITT is based on intuitions.

Comment by mr-hire on We run the Center for Applied Rationality, AMA · 2019-12-23T03:08:50.225Z · score: 5 (3 votes) · LW · GW

I suspect the CFARians have more delicious cake for you, as I haven't put that much time into circling, and the related connection skills I worked on more than a decade ago have atrophied since.

Things I remember:

  • much quicker connection with people
  • there were a few things, like exercise, that I wasn't passionate about but wanted to be; after talking with people who were passionate, I was able to become passionate about those things myself
  • I was able to more quickly learn social cognitive strategies by interacting with others who had them.

Comment by mr-hire on We run the Center for Applied Rationality, AMA · 2019-12-23T02:54:44.734Z · score: 2 (1 votes) · LW · GW

Also, in particular: "felt sense" refers to the qualia related to intuitions, rather than to the intuitions themselves.

Comment by mr-hire on We run the Center for Applied Rationality, AMA · 2019-12-23T00:11:41.648Z · score: 5 (3 votes) · LW · GW

I think that Gendlin thinks all pre-verbal intuitions are represented with physical sensations.

I don't agree with him but still use the felt-sense language in these parts because rationalists seem to know what I'm talking about.

Comment by mr-hire on We run the Center for Applied Rationality, AMA · 2019-12-22T23:32:06.065Z · score: 6 (4 votes) · LW · GW

The Wikipedia article for Gendlin's Focusing has a section trying to describe the felt sense. Leaving out the specific part about "the body", the first part says:

"Gendlin gave the name 'felt sense' to the unclear, pre-verbal sense of 'something'—the inner knowledge or awareness that has not been consciously thought or verbalized"

which is fairly close to my use of it here.

That aside, the model you have sketched seems implausible to me; but, more to the point, I wonder what rent it pays? Perhaps it might predict, for example, that certain people might be really good at learning tacit knowledge, etc.; but then the obvious question becomes: fair enough, and how do we test these predictions?

One thing it might predict is that there are ways to train the transfer of intuition, from both the teaching and the learning side, and that by training these, people get better at picking up intuitions.

Hmm, I am inclined to agree with your observation re: deliberate practice. It does seem puzzling to me that the solution to the (reasonable) view “intuition is undervalued, and as a consequence deliberate practice is under-emphasized”

I do believe CFAR at one point was teaching deliberate practice and calling it "turbocharged training". However, if one is really interested in intuition and thinks it's useful, the next obvious step is to ask: "OK, I have this blunt instrument for teaching intuition called deliberate practice; can we use an understanding of how intuitions work to improve upon it?"

Your questions imply that these are the same thing. But (even in the hypothetical case where there is such a thing as the latter) they are not!

Good catch, this assumes that my simplified model of how intuitions work is at least partly correct. If the felt sense you get from a particular situation doesn't relate to intuition, or if it's impossible for one human being to get better at feeling what another is feeling, then these are not equivalent. I happen to think both parts hold: the felt sense does relate to intuition, and it is possible to get better at feeling what another is feeling.

Comment by mr-hire on Decoupling vs Contextualising Norms · 2019-12-22T19:23:23.094Z · score: 4 (2 votes) · LW · GW

I think this is valid.

Comment by mr-hire on (Feedback Request) Quadratic voting for the 2018 Review · 2019-12-22T18:48:30.780Z · score: 4 (2 votes) · LW · GW

Strategic voting for me = trying to think how much value your vote has relative to the outcome you're trying to achieve. I don't see for instance looking at how many people have already voted on something as "gaming the rules", it just changes the value of a marginal vote of my own. I expect most people to think like that because QV is already making you think about the marginal value of another vote.

Comment by mr-hire on We run the Center for Applied Rationality, AMA · 2019-12-22T17:26:06.701Z · score: 2 (1 votes) · LW · GW

Yes. If you're wondering, I basically updated more towards #1.

I wouldn't call the conclusion unwarranted, by the way; it's a perfectly valid interpretation of seeing this sort of stance from you. It was simply uninformed.

Comment by mr-hire on (Feedback Request) Quadratic voting for the 2018 Review · 2019-12-22T17:24:43.846Z · score: 2 (1 votes) · LW · GW

One of the things I really love about pairwise bounded QV is that it actually disincentivizes even unconscious collusion. In a democratic republic like the US with a traditional voting scheme, I'm incentivized to find issues where I agree with others, so that I have more voting power.

In a pairwise bounded QV voting scheme, I'm actually incentivized to find issues that I care about that are neglected by others, and vote on those (as my votes will literally be worth more).

Of course, one of the biggest issues with pairwise bounded QV is that it makes it much harder to figure out how much an individual vote will be worth on any given issue, as it depends on how correlated my votes are with others who vote on the same issues.
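As a rough sketch of the bounding idea only (borrowing the quadratic funding formulation; the actual design in Buterin's post differs in its details, and all names here are illustrative): cap the cross term any pair of voters can contribute across all issues, so highly correlated voters add less together.

```python
import math
from collections import defaultdict

def pairwise_bounded_tally(votes, pair_budget=1.0):
    """Sketch of a pairwise-bounded quadratic tally.

    votes: {issue: {voter: credits_spent}}, credits assumed non-negative.
    The tally for an issue is the sum of individual ("self") terms plus
    pairwise cross terms, with each pair's total cross term capped.
    """
    # Total cross term per voter pair, summed across all issues.
    pair_total = defaultdict(float)
    for weights in votes.values():
        voters = list(weights)
        for i, a in enumerate(voters):
            for b in voters[i + 1:]:
                pair_total[(a, b)] += math.sqrt(weights[a] * weights[b])

    # Scale each pair down so its total cross term never exceeds the
    # budget: voters who always vote together contribute less as a pair.
    scale = {p: (min(1.0, pair_budget / t) if t > 0 else 1.0)
             for p, t in pair_total.items()}

    tallies = {}
    for issue, weights in votes.items():
        voters = list(weights)
        total = sum(weights.values())  # self terms
        for i, a in enumerate(voters):
            for b in voters[i + 1:]:
                total += 2 * scale[(a, b)] * math.sqrt(weights[a] * weights[b])
        tallies[issue] = total
    return tallies
```

This also makes the difficulty concrete: the scale factor on your pairs depends on how everyone else votes, so the marginal value of your vote can't be computed in isolation.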


Comment by mr-hire on (Feedback Request) Quadratic voting for the 2018 Review · 2019-12-22T17:05:30.624Z · score: 9 (3 votes) · LW · GW

There's a variant on quadratic funding called pairwise quadratic funding that aims to make naive collusion much less useful: https://ethresear.ch/t/pairwise-coordination-subsidies-a-new-quadratic-funding-design/5553

AFAICT it hasn't been adapted yet to quadratic voting, but I'd love to see LW be the first ones to do so.

Comment by mr-hire on (Feedback Request) Quadratic voting for the 2018 Review · 2019-12-22T17:01:22.237Z · score: 4 (2 votes) · LW · GW

That's interesting, because I expect most people to vote strategically when using QV. The structure of QV heavily encourages thinking about the value of each marginal vote.
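A worked example of the marginal-vote arithmetic (just the standard quadratic cost schedule):

```python
def marginal_vote_cost(n):
    """Under quadratic voting, n votes on one issue cost n**2 credits in
    total, so the nth vote costs n**2 - (n - 1)**2 = 2n - 1 credits."""
    return n ** 2 - (n - 1) ** 2

# Piling 3 votes on one issue costs 1 + 3 + 5 = 9 credits, while casting
# 1 vote on each of 3 issues costs 3. Each additional vote on the same
# issue must be worth strictly more to you than a first vote elsewhere.
print([marginal_vote_cost(n) for n in (1, 2, 3)])  # [1, 3, 5]
```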

Comment by mr-hire on We run the Center for Applied Rationality, AMA · 2019-12-22T16:51:46.478Z · score: 7 (2 votes) · LW · GW
Is there some reason to believe that being good at “simulating the felt senses of their conversational partners in their own minds” (whatever this means—still unclear to me) leads to being “really good at learning tacit knowledge”?

This requires some model of how intuitions work. One model I like is to think of "intuition" as a felt sense or aesthetic that arises from hundreds of little associations you're picking up from a particular situation.

If I'm quickly able to get a sense, in my own mind, of what it feels like for you (i.e., get that same felt sense or aesthetic feel when looking at what you're looking at), and use circling-like tools to tease out which parts of the environment most contribute to that aesthetic feel, I can quickly create similar associations in my own mind and thus develop similar intuitions.

If so, then the followup question is: is there some way for me to come into possession of evidence of this claim's truth, without personally interacting with many (or any) "seasoned authentic relaters"?

Possibly you could update by hearing many other people who have interacted with seasoned authentic relaters stating they believe this to be the case.

Can you say more about how you came to realize this?

I mean, to me this was just obvious, seeing for instance how little the rationalists I interact with emphasize things like deliberate practice relative to things like conversation and explicit thinking. I'm not sure how CFAR recognized it.

However, supposing that I nevertheless persisted in wanting to “impart my intuition”, I would definitely rather have #2 than #1. I would expect that having done what you describe in #1 would hinder, rather than help, the accomplishment of this sort of goal.

I think this is a coherent stance if you think the general "learning intuitions" skill is impossible. But imagine it weren't: would you agree that training it would be useful?

Comment by mr-hire on We run the Center for Applied Rationality, AMA · 2019-12-22T16:38:50.601Z · score: 4 (2 votes) · LW · GW

I can't, but here's an example from this same thread:

https://www.lesswrong.com/posts/96N8BT9tJvybLbn5z/we-run-the-center-for-applied-rationality-ama#HgQCE8aHctKjYEWHP

In this comment, you explicitly understood and agreed with the material that was teaching explicit knowledge (philosophy), but objected to the material designed to teach intuitions (circling).

Comment by mr-hire on We run the Center for Applied Rationality, AMA · 2019-12-22T16:30:26.433Z · score: 4 (2 votes) · LW · GW

It seems like Paradigm Academy is trying to do something like create an Elon Musk factory:

http://paradigmacademy.co/

But then again, so is Y Combinator, and every other incubator, as well as pretty much every leadership retreat (OK, maybe not the leadership retreats, because Elon Musk is a terrible leader, but they're trying to do something like create a factory for what people imagine Elon Musk to be like). It seems like a very competitive space to create an Elon Musk factory, because it's so economically valuable.