Posts

Noticing and Overcoming Bias 2021-03-06T21:06:36.495Z
When you already know the answer - Using your Inner Simulator 2021-02-23T17:58:29.336Z
Overcoming Helplessness 2021-02-22T15:03:45.869Z
The World is Full of Wasted Motion 2021-02-02T19:02:48.724Z
Retrospective on Teaching Rationality Workshops 2021-01-03T17:15:00.479Z
Asking For Help 2020-12-27T11:32:32.462Z
On Reflection 2020-12-14T16:14:29.818Z
My case for starting blogging 2020-10-22T17:43:51.865Z
On Slack - Having room to be excited 2020-10-10T19:02:42.109Z
On Option Paralysis - The Thing You Actually Do 2020-10-03T11:50:57.070Z
Your Standards are Too High 2020-10-01T17:03:31.969Z
Learning how to learn 2020-09-30T16:50:19.356Z
Seek Upside Risk 2020-09-29T16:47:14.033Z
Macro-Procrastination 2020-09-28T16:07:48.670Z
Taking Social Initiative 2020-09-19T15:31:21.082Z
On Niceness: Looking for Positive Externalities 2020-09-14T18:03:12.196Z
Stop pressing the Try Harder button 2020-09-05T09:10:05.964Z
Helping people to solve their problems 2020-08-31T20:41:04.796Z
Meaningful Rest 2020-08-29T15:50:05.782Z
How to teach things well 2020-08-28T16:44:27.817Z
Live a life you feel excited about 2020-08-21T19:16:17.793Z
On Creativity - The joys of 5 minute timers 2020-08-18T06:26:55.493Z
On Systems - Living a life of zero willpower 2020-08-16T16:44:13.100Z
On Procrastination - The art of shaping your future actions 2020-08-01T10:22:44.450Z
What it means to optimise 2020-07-25T09:40:09.616Z
How to learn from conversations 2020-07-25T09:36:16.105Z
Taking the first step 2020-07-25T09:33:45.111Z
Become a person who Actually Does Things 2020-07-25T09:29:21.314Z
The Skill of Noticing Emotions 2020-06-04T17:48:28.782Z

Comments

Comment by Neel Nanda (neel-nanda-1) on What are fun little puzzles / games / exercises to learn interesting concepts? · 2021-03-18T07:58:55.128Z · LW · GW

The Clearer Thinking Calibrate Your Judgement tool seems worth checking out.

https://www.clearerthinking.org/post/2019/10/16/practice-making-accurate-predictions-with-our-new-tool

Comment by Neel Nanda (neel-nanda-1) on Strong Evidence is Common · 2021-03-16T11:09:48.579Z · LW · GW

I really like this post! I have a concerned intuition along the lines of 'sure, the first example in this post seems legit, but I don't think this should actually update anything in my worldview, for the real-life situations where I actively think about Bayes Rule + epistemics'. And I definitely don't agree with your example about top 1% traders. My attempt to put this into words:

1. Strong evidence is rarely independent. Hearing you say 'my name is Mark' to person A might be 20,000:1 odds, but hearing you then say it to person B is like 10:1 tops. Most hypotheses that explain the first event well also explain the second event well. So while the first sample contains the most information, the second sample contains way less, making this idea much less exciting. (There's a quick numerical sketch of this after these points.)

It's much easier to get to middling probabilities than to high ones. This makes sense: I'm only going to explicitly consider the odds of <100 hypotheses for most questions, so a hypothesis with, say, <1% probability isn't likely to be worth thinking about. But to get to 99%, a hypothesis needs to defeat all of the others too.

Eg, in the 'top 1% of traders' example, it might be easy to be confident I'm above the 90th percentile, but much harder to move beyond that.

2. This gets much messier when I'm facing an adversarial process. If you say 'my name is Mark Xu, want to bet about what's on my driver's license?', this is much worse evidence, because I now face adverse selection. Many real-world problems I care about involve other people applying optimisation pressure to shape the evidence I see, and some of that pressure is adversarial. The world does not tend to involve people trying to deceive me about world capitals.

An adversarial process could be someone else trying to trick me, but it could also be a cognitive bias I have, eg 'I want to believe that I am an awesome, well-calibrated person'. It could also be selection bias - what is the process that generated the evidence I see?

3. Some questions have obvious answers, others don't. The questions most worth thinking about are rarely the ones that are obvious. The ones where I can access strong evidence easily are much less likely to be worth thinking about. If someone disagrees with me, that's at least weak evidence against the existence of strong evidence.
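
To put rough numbers on point 1, here's a minimal sketch in Python (the 20,000:1 and 10:1 likelihood ratios are the illustrative figures above, not measured values):

    import math

    # Illustrative likelihood ratios from point 1 (not measured values):
    # the first "my name is Mark" is worth ~20,000:1, but because the two
    # observations are highly correlated, the second is worth ~10:1 at most.
    first_lr = 20_000
    second_lr = 10

    print(math.log2(first_lr))   # ~14.3 bits of evidence from the first sample
    print(math.log2(second_lr))  # ~3.3 bits from the second: far less information
    print(first_lr * second_lr)  # combined posterior odds still multiply: 200,000:1

So the posterior odds still multiply, but almost all of the information came from the first sample.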

Comment by Neel Nanda (neel-nanda-1) on Mentorship, Management, and Mysterious Old Wizards · 2021-02-27T08:05:52.909Z · LW · GW

+1. I went to a CFAR camp for high schoolers a few years ago, and the idea that I can be ambitious and actually fix problems in my life was BY FAR the biggest takeaway I got (and one of the most valuable life lessons I ever learned)

Comment by Neel Nanda (neel-nanda-1) on When you already know the answer - Using your Inner Simulator · 2021-02-24T08:51:57.931Z · LW · GW

As a single point of anecdata, I personally am fairly prone to negative thoughts and self-blame, and find this super helpful for overcoming that. My Inner Simulator seems to be much better grounded than my spirals of anxiety, and not prone to the same biases.

Some examples:

I'm stressing out about a tiny mistake I made, and am afraid that a friend of mine will blame me for it. So I simulate having the friend find out and get angry with me about it, and ask myself 'am I surprised at this outcome'. And discover that yes, I am very surprised by this outcome - that would be completely out of character and would feel unreasonable to me in the moment.

I have an upcoming conversation with someone new and interesting, and I'm feeling insecure about my ability to make good first impressions. I simulate the conversation happening, and leaving feeling like it went super well, and check how surprised I feel. And discover that I don't feel surprised, that in fact this happens reasonably often.

Such a person could also come up with a way they could improve their life, fail to implement it, and then feel guilty when their reality fails to measure up to their imagined future. 

This seems like a potentially fair point. I sometimes encounter this problem. Though I find that my Inner Sim is a fair bit better calibrated about what solutions might actually work. Eg it has a much better sense for 'I'll just procrastinate and forget about this'. On balance, I find that the benefits of 'sometimes having a great idea that works' + the motivation to implement it far outweigh this failure mode, but your mileage may vary.

Comment by Neel Nanda (neel-nanda-1) on When you already know the answer - Using your Inner Simulator · 2021-02-24T08:40:19.547Z · LW · GW

Nice, I really like the approach of 'write up a concrete question -> assume I received a helpful answer -> let my inner sim fill in the blanks about what it says'

Comment by Neel Nanda (neel-nanda-1) on Anti-Aging: State of the Art · 2021-02-12T08:36:16.494Z · LW · GW

Ooh, no. That's super interesting, thanks!

Comment by Neel Nanda (neel-nanda-1) on Anti-Aging: State of the Art · 2021-02-02T10:18:22.738Z · LW · GW
How would writing the question help to convince people? Would it not only be convincing in 5-10 years' time if some of the predictions turn out to be accurate? Or, do you think if consensus on a Metaculus question that prediction X will occur is in and of itself convincing for rationalists? 

I would personally find a consensus on Metaculus pretty convincing (at least, conditional on there being a significant number of predictions for the question). I find it hard to gauge other people's expertise and how much to defer to them, especially when I just see their point of view. Aggregating many people's predictions is much more persuasive to me, and many of the top Metaculus predictors seem to have good epistemics.

Comment by Neel Nanda (neel-nanda-1) on DALL-E by OpenAI · 2021-01-06T09:25:56.661Z · LW · GW

Interesting! I didn't feel that at all, I thought things were pretty artsy/aesthetically pleasing on the whole. Any examples of things that felt nauseating?

Comment by Neel Nanda (neel-nanda-1) on Covid 12/24: We’re F***ed, It’s Over · 2021-01-06T09:22:07.506Z · LW · GW
But indeed, I don't understand why the number of infected people suddenly decreases at the end of November. An explanation would be helpful.

As henryaj says above, the UK was in a national lockdown Nov 5 - Dec 2. Accounting for a lag in catching it -> positive test, that matches the graph reasonably well

Comment by Neel Nanda (neel-nanda-1) on Covid 12/24: We’re F***ed, It’s Over · 2021-01-06T09:18:59.109Z · LW · GW

A really key difference between March and November is that schools were open in November but not March. Though the UK is now in a third lockdown, and it looks like schools won't be re-opening

Comment by Neel Nanda (neel-nanda-1) on Ways to be more agenty? · 2021-01-05T11:07:00.057Z · LW · GW

I think this is a really important question, and something I think a lot about. This is a pretty consistent theme of my blog. Some posts I've written that might help (If you'll forgive the blatant self-promotion):

My favourite: Become a person who actually does things - make 'being an agent' part of your identity, and something to aim for for its own sake. Do something because it feels agenty, not necessarily because it's obviously the best thing to do. It's much easier to get good at being agenty about the right things once you've built the habit in the first place

Notice when you're procrastinating on something, and develop a toolkit for things to do in the moment to ensure you take action

A lot of my failure to be agenty comes from being risk-averse and fearing downsides. Learn to look past the safety of the "default" action of doing nothing, and to get excited about small chances of massive upsides

Have a regular time in your life to reflect on things, and give yourself prompts to notice what you could be being more agenty about. Make sure that noticing and correcting this kind of thing is a default part of your life, rather than something that needs effort and inspiration. Make agency the path of least resistance

If you feel bottlenecked by creativity or ideas, set a 5 minute timer and use the time pressure and urgency to spur yourself into doing things

Comment by Neel Nanda (neel-nanda-1) on Dario Amodei leaves OpenAI · 2021-01-01T13:27:38.020Z · LW · GW

Jack Clark and Chris Olah announced on Twitter that they're also leaving

Comment by Neel Nanda (neel-nanda-1) on Great minds might not think alike · 2020-12-31T16:23:26.427Z · LW · GW

I love " great minds might not think alike"

Comment by Neel Nanda (neel-nanda-1) on Great minds might not think alike · 2020-12-27T15:39:51.059Z · LW · GW

I really enjoyed this post, and it helped put a vague intuition into words, thanks! Strongly upvoted

Comment by Neel Nanda (neel-nanda-1) on To listen well, get curious · 2020-12-19T12:35:57.051Z · LW · GW

Great post! I really strongly agree with this advice, and I think it's one of the most important ways my communication skills have improved over time.

I especially liked your point about cargo cult science - that active listening often seems trite because people are just repeating things, without the underlying mental effort. That felt like it helped clarify some disagreements I'd had with people about this approach before: I think I took the underlying mental effort as implicit, and they didn't.

My current favourite way of explaining it is that the default state of the world is that you've misunderstood, because good communication is hard. You should reflect things back, ask "was that a correct summary?", have the other person point out what's wrong, try again, and keep iterating until they're happy

Comment by Neel Nanda (neel-nanda-1) on Announcing the Technical AI Safety Podcast · 2020-12-08T21:53:34.705Z · LW · GW

This seems like an awesome project! I'm excited to see where this goes

Comment by Neel Nanda (neel-nanda-1) on Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More · 2020-12-05T09:36:25.506Z · LW · GW

I found this post valuable for better understanding the perspectives of AI experts who aren't concerned about alignment (my rough take is "they think the alignment problem will be easy, and that the control problem will be easy enough to patch any gaps in that"). And I've found this useful for updating my intuitions about worlds where the people working on TAI are not cautious enough about safety. It's helped update me towards thinking most of the problems come from worlds with subtle problems of alignment, and that people would notice obvious ones.

And I appreciate Ben writing this up - a Facebook thread is a terrible format for a public debate, and I would never have come across this otherwise!

Comment by Neel Nanda (neel-nanda-1) on The Power to Teach Concepts Better · 2020-12-05T09:32:50.803Z · LW · GW

I think being specific and giving examples is one of the most valuable techniques I know for teaching things well. This post does a good job of conveying that insight clearly and intuitively (with many examples!), and helped me flesh out my model of why examples work so well, and how and where to use them

Comment by Neel Nanda (neel-nanda-1) on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2020-12-05T09:31:29.138Z · LW · GW

CAIS is a very different take on what transformative AI might look like than the ones I find most intuitive. I think it's really useful to experience a range of different perspectives to break me out of my cached thoughts.

And I'm grateful to Rohin for writing up this summary! I think this kind of thing is a valuable service for spreading these ideas to more people, who don't want to read a 200 page document.

Comment by Neel Nanda (neel-nanda-1) on Chris Olah’s views on AGI safety · 2020-12-05T09:26:44.746Z · LW · GW

I found this post extremely valuable. This is a novel approach to safety research, which I hadn't come across before, and which I likely would not otherwise have come across without Evan putting in the effort to write this post (and Chris putting in the work to come up with the ideas!).

I personally find interpretability to be a fascinating problem that I might want to research someday. This post updated me a lot towards thinking that it's a valuable and important problem for achieving alignment.

Further, I am very excited to see more posts like this in general - I think it's extremely good and healthy to bring in more perspectives on the alignment problem, and different paths to success.

Comment by Neel Nanda (neel-nanda-1) on The LessWrong 2019 Review · 2020-12-04T09:26:41.488Z · LW · GW

A few of the links to previous reviews link to the wrong reviews. I noticed 'evhub on "Paul's research agenda FAQ"' and 'From Vanessa Kosoy on "Clarifying AI Alignment":' link to the wrong place

Comment by Neel Nanda (neel-nanda-1) on Group debugging guidelines & thoughts · 2020-10-20T05:48:03.495Z · LW · GW

This is an excellent post. Thanks for writing it!

Comment by Neel Nanda (neel-nanda-1) on What posts do you want written? · 2020-10-20T05:41:11.344Z · LW · GW

I've written up my thoughts on doing (informal) pair debugging from the debugger perspective here

Comment by Neel Nanda (neel-nanda-1) on On Slack - Having room to be excited · 2020-10-13T06:22:44.972Z · LW · GW

Yeah, fair point! Since writing this I've gotten feedback from a few people who think their lives have too much Slack, so I was clearly over-generalising from my own experience and social context.

Though, one supporting point: I think someone can both spend a bunch of time watching TV etc and lack Slack. I'm imagining an archetype of person who always feels busy and behind, but also has poor executive function, a lot of aversions, and motivation and procrastination problems (I can think of several friends in this category). Someone who clearly is not actually optimising their time for productivity, but feels overwhelmed. So they don't give themselves permission to use Slack, take time off, etc, even though they implicitly use a bunch of it on procrastination.

Comment by Neel Nanda (neel-nanda-1) on On Slack - Having room to be excited · 2020-10-11T06:26:43.165Z · LW · GW

I agree that Moloch is important, but that wasn't what I was trying to point at in this post. I was trying to point at the failure mode where it feels wrong to have available resources, because you could be spending them - very much an internal failure mode. While I see Moloch as an external failure mode, where outside forces push me towards competition

Comment by Neel Nanda (neel-nanda-1) on Seek Upside Risk · 2020-10-04T20:25:26.170Z · LW · GW

Fair point that that's somewhat hypocritical, and undercuts my point. I think the point still often stands though - this is a bug I have, but it's far from universal; and even for me, the answer to a choice is often obvious. But in those cases it's uninteresting and scarcely feels like a choice.

Comment by Neel Nanda (neel-nanda-1) on Postmortem to Petrov Day, 2020 · 2020-10-04T19:42:33.438Z · LW · GW

Rationalists out of context Tumblr?

Comment by Neel Nanda (neel-nanda-1) on On Destroying the World · 2020-10-03T13:39:28.442Z · LW · GW

Oh, thanks! That sounds really useful when LW is being slow on mobile

Comment by Neel Nanda (neel-nanda-1) on Moral public goods · 2020-10-03T07:50:09.647Z · LW · GW
(In retrospect people also didn't like the big and unrealistic numbers, so I could have just made them 10 and 100 instead. I generally overestimated the extent to which readers would separate the simple quantitative point, which I wanted to make in the shortest way possible but didn't think about that much, from other features of the scenario.)

For what it's worth, I think the big, unrealistic numbers and framing of the example made this feel like a much more valuable intuition pump to me, so thanks! (Key point I took from it: It is actually perfectly reasonable to favour taxation while being unwilling to donate yourself, and there's a big gap between these two thresholds)

Comment by Neel Nanda (neel-nanda-1) on On Destroying the World · 2020-10-03T06:28:31.039Z · LW · GW
Do you agree that the literal monetary value of the site being down for a day is (likely) greater than that? Never mind the symbolism, there's just like two thousand people who visit the Frontpage in a day, around half of whom might pay something in the range of $1-$10 for the site not to be randomly down on them for a day?

Interesting. My intuition was "24 hours isn't a long time, and it's just the front page, people can surely come back later". But while that's a small inconvenience, $1 worth of inconvenience sounds plausible. So yeah, fair point! $1-10k actually seems like a fair value for this, thanks

EDIT: Reading the other comments on that point, it seems reasonable that LessWrong power users are both the people who'd be most inconvenienced and the ones best able to work around the outage. And I expect most of those people not to know about GW (what is GreaterWrong anyway?), but also that not knowing about it correlates with caring less about the existence of LW. So I guess I'd lower the estimate a bit

Comment by Neel Nanda (neel-nanda-1) on Babble challenge: 50 ways of sending something to the moon · 2020-10-02T06:32:45.673Z · LW · GW

I got a bit stuck around halfway through, and started to interpret it more and more liberally to squeeze out some more angles on it

Comment by Neel Nanda (neel-nanda-1) on Babble challenge: 50 ways of sending something to the moon · 2020-10-01T11:30:11.416Z · LW · GW

Nice! #13 is my favourite

Comment by Neel Nanda (neel-nanda-1) on Industrial literacy · 2020-10-01T11:24:58.387Z · LW · GW

I agree it's a bit more nuanced than it seems at face value - my alternate universe self would likely have different friends because some of my friends would have died in childhood, and this wouldn't matter so much to my alternate self. But to my current self, it's a super big deal if half of the people I currently care about would have died young! And I think that's the point Jason is making.

Comment by Neel Nanda (neel-nanda-1) on Babble challenge: 50 ways of sending something to the moon · 2020-10-01T09:26:08.110Z · LW · GW

Beautiful! #14 and #22 were inspired

Comment by Neel Nanda (neel-nanda-1) on Babble challenge: 50 ways of sending something to the moon · 2020-10-01T07:27:00.923Z · LW · GW

Ah, sorry, the original comment was an experiment with multi-line spoiler tags. I've now given up on that, and linked to a Google doc

Comment by Neel Nanda (neel-nanda-1) on Babble challenge: 50 ways of sending something to the moon · 2020-10-01T07:23:37.591Z · LW · GW

Inspired!

I'm very entertained by the number of items our lists converged on

Comment by Neel Nanda (neel-nanda-1) on Babble challenge: 50 ways of sending something to the moon · 2020-10-01T06:42:38.468Z · LW · GW

Fun exercise, thanks for suggesting it!

I can't figure out multi-line spoiler tags, so I've put my 50 in here (a lot of these are blatantly cheating, but 50 is surprisingly hard!)

Comment by Neel Nanda (neel-nanda-1) on The Skill of Noticing Emotions · 2020-10-01T05:54:56.036Z · LW · GW

Huh. Somehow that has never come up before...

My default reaction when holding my phone was to either put the phone down, or just to tap my fingers together

Comment by Neel Nanda (neel-nanda-1) on On Destroying the World · 2020-09-30T06:49:29.789Z · LW · GW
We'll come to this in a moment, but first I want to address his final sentence: "Like, the email literally said you were chosen to participate because we trusted you to not actually use the codes". I've played lot of role-playing games back in my day and often people write all kinds of things as flavour text. And none of it is meant to be taken literally.
I want to point out a few things in particular. Firstly, the email was sent out to 270 users which from my perspective made it seem that the website was almost guaranteed to go down at some time, with the only question being when (I was aware the game was played last year, but I had no memory of the outcome or the number of users).
Beyond this, the fact that the message said, "Hello Chris_Leong" and that it was sent to 270 users meant that it didn't really feel like a personal request from Ben Pace. Additionally, note the somewhat jokey tone of the final sentence, "I hope to see you in the dawn of tomorrow, with our honor still intact". Obviously, someone pressing the button wouldn't damage the honor or reputation of Less Wrong and so it seemed to indicate that this was just a bit of fun..

I resonate with basically all of this from Chris' post

Trying to introspect a bit more, I think that unseeing the cultural context is hard, and that that context massively affects your priors of how to interpret something like this. My first reaction was that the email was a joke. Then, realising it wasn't a joke, being confused by why I'd been sent it (the email began Dear Neel_Nanda_1, not Dear Neel, which made it seem less like I'd been specially chosen). And then, realising that they'd actually changed the Front Page, and done this before, being really entertained at the idea of celebrating Petrov Day in this way. But it felt like "this is a fun, slightly over the top way of celebrating, and we want to see interesting and fun things happen".

I think my priors are so far from people taking something as minor as "the LW frontpage goes down for a day" seriously, that it took me reading the thread under jefftk last year about selling his launch codes for counterfactual donations, and seeing people genuinely debate "is this worth more than $1.6K", to realise that people took the symbolic value really seriously. (And I'm still pretty confused by this - if I had read about Petrov Day 2019, and saw that someone blew up the front page for a large donation to AMF, that would probably marginally raise my opinion of LessWrong. And I utterly do not understand people who would price it at over $10K, let alone $1m)

Things that I think would have changed my intuitive framing:

  • Having the email drop out of "RPG flavour text mode" and be explicit about the cultural context and how seriously people took it
  • Having the downside be actually, meaningfully high (eg, the LW website going down for a month; though I think this is net bad for the obvious reasons). As is, it didn't feel like something to be taken seriously, because the actual stakes were low.
  • Being given context and invited before Petrov Day, and needing to take some agency to accept. I think this would have made the notion of "you are being invited and trusted" clearer. I was surprised by receiving the email and don't see myself as a notable LW contributor, and assumed eg this was automatically sent to the 270 most recent posters, or highest karma users or something, rather than having been hand-picked by Ben

(In writing all of this, I feel like I'm being unfair to Ben/implying all of this should have been obvious to you guys. That's not at all my intent, and I hope you take this in the spirit of "an attempt to narrate my internal experience, that might help with orchestrating future things")

Idk, hope all that helped. This kind of thing is far outside my standard conception of how people and communities work, and I'm not used to people taking symbols this seriously. And I'm surprised by how obvious this all feels to people with the cultural context

Comment by Neel Nanda (neel-nanda-1) on Seek Upside Risk · 2020-09-30T06:32:35.417Z · LW · GW

Thanks! Yeah, I definitely agree that literally maximising for EV can be bad. The reason I heavily emphasised that is to convey the key point that you're trying to make decisions under uncertainty, that uncertainty is an inherent fact of life, and that, at least for me, thinking about EV leads to systematically better decisions. I'm sufficiently bad by default at accounting for uncertainty that a focus on EV pushes me in the right direction.

In practice, the decision theory would be along the lines of "try estimating EV. If the answer is obviously, massively positive then do it, otherwise think harder" (in which case maximising E(log(X)) and E(X) should give the same answer). And the post was similarly aimed at people who have such a strong bias that thinking about EV is a nudge in the right direction.
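
As a toy illustration of that parenthetical (the numbers here are hypothetical, just to show E(X) and E(log(X)) agreeing when the stake is small relative to the bankroll):

    import math

    # Hypothetical numbers: a $100 bet from a $10,000 bankroll that
    # returns 3x the stake with probability 0.9, loses the stake otherwise.
    bankroll, stake, p_win = 10_000, 100, 0.9
    win, lose = bankroll + 2 * stake, bankroll - stake

    ev = p_win * win + (1 - p_win) * lose
    log_ev = p_win * math.log(win) + (1 - p_win) * math.log(lose)

    print(ev > bankroll)                # True: maximising E(X) says take the bet
    print(log_ev > math.log(bankroll))  # True: E(log(X)) agrees at small stakes

The two criteria only come apart once the stake is a large fraction of the bankroll, which is exactly the "otherwise think harder" branch.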

Would you have preferred the post if framed around E(log(X))?

Comment by Neel Nanda (neel-nanda-1) on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-28T16:47:30.517Z · LW · GW

I think "doesn't want to ruin other people's fun or do anything significant" feels more accurate than "do not engage" here?

Comment by Neel Nanda (neel-nanda-1) on On Destroying the World · 2020-09-28T15:27:31.814Z · LW · GW

Thanks for writing this! It seemed like people were being unwarrantedly harsh to you in that thread.

My personal experience was getting the email from Ben, and this being the first I'd ever heard about LessWrong's approach to Petrov Day. And I somewhat considered pressing the button for the entertainment value, until I read the comments on the 2019 thread and got a sense of how seriously people took it. 

I think it's completely reasonable to not have gotten that cultural context from the information available, and so not to have taken the whole thing super seriously.

And personally I found it fairly entertaining/educational how all of this turned out (though it's definitely sad for all the Pacific time people who were asleep throughout the whole thing :( )

EDIT: Just wanted to add that, now I have the cultural context, I think this was all an awesome celebration and I'm flattered to have been invited to be a part of it! My main critique was that I think it's extremely reasonable for Chris not to have had the relevant context, but many of those commenting seem to have taken this background context as a given, since it's clear to them.

Comment by Neel Nanda (neel-nanda-1) on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T13:39:28.371Z · LW · GW

Awww. I can't decide whether to be annoyed with petrov_day_admin_account, or to appreciate their object lesson in the importance of pre-commitment and robust epistemologies (I'm leaning towards both!)

Comment by Neel Nanda (neel-nanda-1) on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T11:37:24.192Z · LW · GW

Well, that lasted a disappointingly short time :(

Comment by Neel Nanda (neel-nanda-1) on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T08:39:29.004Z · LW · GW

I'm curious why this was designed to be non-anonymous? It feels more in the spirit of "being aware I could destroy something, and choosing not to" if it doesn't have a cost to me, beyond awareness that destruction is sad

Comment by Neel Nanda (neel-nanda-1) on Updates Thread · 2020-09-09T18:41:30.191Z · LW · GW

I'd be curious to hear more about this shift, and how long it took before it became noticeable - exercising more is something I'm currently trying to motivate myself to do!

Comment by Neel Nanda (neel-nanda-1) on Stop pressing the Try Harder button · 2020-09-06T09:01:56.417Z · LW · GW

I agree that the phrasing as-is is a bit hyperbolic - sometimes the Try Harder button is useful, and it's definitely a tool worth having in your toolkit. But I also think people majorly over-use it, and that this is unsustainable, high-cost and rarely works in the long-term. And so "stop planning to press it" feels too weakly phrased. At least for me, I rarely explicitly plan to use it, it's just implicitly planned when I come up with a vague, fuzzy plan. And so an injunction to not plan around it doesn't feel sufficient for fixing the problem

Maybe "Stop relying on the Try Harder button"? The main point I want to make is that, if you notice yourself using it on a regular basis, alarm bells should start going off in your mind. Something is going wrong with your life systems, this is important, and should be a priority to fix. And I think there are ways that removing it as an option at all can help you to develop much healthier habits.

Comment by Neel Nanda (neel-nanda-1) on Stop pressing the Try Harder button · 2020-09-05T15:26:51.689Z · LW · GW

Thanks! Very strongly agreed, and I consider this the flip side of the point I was making in this post. I see being effective as breaking down into two parts: Having realistic and well-calibrated standards for how much you can get done, and being effective at executing on what you can do.

Comment by Neel Nanda (neel-nanda-1) on Helping people to solve their problems · 2020-09-01T19:49:15.365Z · LW · GW

Interesting, I'd be curious to hear more about which parts of this reminded you of classical rhetoric?

Comment by Neel Nanda (neel-nanda-1) on How to teach things well · 2020-08-29T10:00:49.433Z · LW · GW

Thanks!

Anchoring the new set of intuitions with a succinct anchor phrase or image that ideally has conceptual hooks into the relevant problem domains so that the concept automatically gets triggered in the situations in which it is useful

Strongly agreed, I've been very pleasantly surprised by how valuable this approach is. I think having a clear label for important intuitions is one of the really valuable things I've gotten from the rationalist community. When writing blog posts, I try fairly hard to give clear labels to the key ideas and to put them in bold.

Creating a toy example of the core concept that has moving parts the student can then move themselves to see how other parts move (conceptually)

I'd be curious to see any examples of this you have in mind? I'm super excited about this as a form of learning, but struggle to imagine a specific example for anything I've tried teaching. This seems better suited to tutoring 1 on 1 than to larger groups/talks, I think?