Posts

My Overview of the AI Alignment Landscape: Threat Models 2021-12-25T23:07:10.846Z
My Overview of the AI Alignment Landscape: A Bird's Eye View 2021-12-15T23:44:31.873Z
What's Stopping You? 2021-10-21T16:20:18.911Z
Intentionally Making Close Friends 2021-06-27T23:06:49.269Z
Noticing and Overcoming Bias 2021-03-06T21:06:36.495Z
When you already know the answer - Using your Inner Simulator 2021-02-23T17:58:29.336Z
Overcoming Helplessness 2021-02-22T15:03:45.869Z
The World is Full of Wasted Motion 2021-02-02T19:02:48.724Z
Retrospective on Teaching Rationality Workshops 2021-01-03T17:15:00.479Z
Asking For Help 2020-12-27T11:32:32.462Z
On Reflection 2020-12-14T16:14:29.818Z
My case for starting blogging 2020-10-22T17:43:51.865Z
On Slack - Having room to be excited 2020-10-10T19:02:42.109Z
On Option Paralysis - The Thing You Actually Do 2020-10-03T11:50:57.070Z
Your Standards are Too High 2020-10-01T17:03:31.969Z
Learning how to learn 2020-09-30T16:50:19.356Z
Seek Upside Risk 2020-09-29T16:47:14.033Z
Macro-Procrastination 2020-09-28T16:07:48.670Z
Taking Social Initiative 2020-09-19T15:31:21.082Z
On Niceness: Looking for Positive Externalities 2020-09-14T18:03:12.196Z
Stop pressing the Try Harder button 2020-09-05T09:10:05.964Z
Helping people to solve their problems 2020-08-31T20:41:04.796Z
Meaningful Rest 2020-08-29T15:50:05.782Z
How to teach things well 2020-08-28T16:44:27.817Z
Live a life you feel excited about 2020-08-21T19:16:17.793Z
On Creativity - The joys of 5 minute timers 2020-08-18T06:26:55.493Z
On Systems - Living a life of zero willpower 2020-08-16T16:44:13.100Z
On Procrastination - The art of shaping your future actions 2020-08-01T10:22:44.450Z
What it means to optimise 2020-07-25T09:40:09.616Z
How to learn from conversations 2020-07-25T09:36:16.105Z
Taking the first step 2020-07-25T09:33:45.111Z
Become a person who Actually Does Things 2020-07-25T09:29:21.314Z
The Skill of Noticing Emotions 2020-06-04T17:48:28.782Z

Comments

Comment by Neel Nanda (neel-nanda-1) on Use Normal Predictions · 2022-01-13T02:02:59.939Z · LW · GW

Thanks, I really enjoyed this post - this was a novel but persuasive argument for not using binary predictions, and I now feel excited to try it out!

One quibble - when you discuss calculating your calibration, doesn't this implicitly assume that your mean was accurate? If my mean is very off but my standard deviation is correct, then this method says my standard deviation is way too low. But maybe this is fine, because if I have a history of getting the mean wrong, I should have a wider distribution?
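
To make the quibble concrete, here's a minimal sketch (assuming the calibration check works by comparing the z-scores, (outcome - predicted mean) / predicted sd, against a standard normal - apologies if I've misunderstood the method, and the numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# True process vs my predictions: the mean is off by 3, but the sd is spot on
true_mean, true_sd = 10.0, 2.0
pred_mean, pred_sd = 13.0, 2.0

outcomes = rng.normal(true_mean, true_sd, size=10_000)
z = (outcomes - pred_mean) / pred_sd

print(np.std(z))                 # ~1.0: the sd itself was fine
print(np.sqrt(np.mean(z ** 2)))  # ~1.8: a check that pools the bias into the spread says the sd was way too low
```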

Comment by Neel Nanda (neel-nanda-1) on Rational Breaks: a better way to work · 2022-01-10T07:16:13.943Z · LW · GW

Generalized Pomodoros?

Comment by Neel Nanda (neel-nanda-1) on My Overview of the AI Alignment Landscape: A Bird's Eye View · 2022-01-09T17:41:01.929Z · LW · GW

Thanks for the feedback! That makes sense, I've updated the intro paragraph to that section to:

There are a range of agendas proposed for how we might build safe AGI, though note that each agenda is far from a complete and concrete plan. I think of them more as a series of confusions to explore and assumptions to test, with the eventual goal of making a concrete plan. I focus on three agendas here; these are just the three I know the most about, have seen the most work on, and, in my subjective judgement, the ones it is most worth newcomers to the field learning about. This is not intended to be comprehensive - see eg Evan Hubinger’s Overview of 11 proposals for building safe advanced AI for more.

Does that seem better?

For what it's worth, my main bar was a combination of 'do I understand this agenda well enough to write a summary' and 'do I associate at least one researcher and some concrete work with this agenda'. I wouldn't think of corrigibility as passing the second bar, since I've only seen it come up as a term to reason about or aim for, rather than as a fully-fledged plan for how to produce corrigible systems. It's very possible I've missed out on some important work though, and I'd love to hear pushback on this

Comment by Neel Nanda (neel-nanda-1) on My Overview of the AI Alignment Landscape: A Bird's Eye View · 2021-12-29T02:53:26.141Z · LW · GW

Thanks a lot for the feedback, and the Anki cards! Appreciated. I definitely find that level of feedback motivating :)

These categories were formed by a vague combination of "what things do I hear people talking about/researching" and "what do I understand well enough that I can write intelligent summaries of it" - this is heavily constrained by what I have and have not read! (I am much less good than Rohin Shah at reading everything in Alignment :'( )

Eg, Steve Byrnes does a bunch of research that seems potentially cool, but I haven't read much of it, and don't have a good sense of what it's actually about, so I didn't talk about it. And this is not expressing an opinion that, eg, his research is bad.

I've updated towards including a section at the end of each post/section with "stuff that seems maybe relevant that I haven't read enough to feel comfortable summarising"

Comment by Neel Nanda (neel-nanda-1) on My Overview of the AI Alignment Landscape: A Bird's Eye View · 2021-12-25T23:50:08.779Z · LW · GW

Thanks for the appreciation!

If you're trying to make it more legible to outsiders, you should consider defining AGI at the top.

Good idea, I just added this note to the top:

Terminology note: There is a lot of disagreement about what “intelligence”, “human-level”, “transformative” or AGI even means. For simplicity, I will use AGI as a catch-all term for ‘the kind of powerful AI that we care about’. If you find this unsatisfyingly vague, OpenPhil’s definition of Transformative AI is my favourite precise definition.

Comment by Neel Nanda (neel-nanda-1) on 2021 AI Alignment Literature Review and Charity Comparison · 2021-12-24T14:58:33.575Z · LW · GW

Thanks! I'm probably not going to have time to write a top-level post myself, but I liked Evan Hubinger's post about it.

Comment by Neel Nanda (neel-nanda-1) on 2021 AI Alignment Literature Review and Charity Comparison · 2021-12-23T22:44:56.939Z · LW · GW

I do wonder if vision problems are unusually tractable here; would it be so easy to visualise what individual neurons mean in a language model?

We actually released our first paper trying to extend Circuits from vision to language models yesterday! You can't quite interpret individual neurons, but we've found some examples of where we can interpret what an individual attention head is doing.

Comment by Neel Nanda (neel-nanda-1) on Where can one learn deep intuitions about information theory? · 2021-12-17T21:59:58.948Z · LW · GW

I really love the essay Visual Information Theory

Comment by Neel Nanda (neel-nanda-1) on How to teach things well · 2021-12-15T10:51:21.635Z · LW · GW

Self review: I'm very flattered by the nomination!

Reflecting back on this post, a few quick thoughts:

  • I put a lot of effort into getting better at teaching, especially during my undergrad (publishing notes, mentoring, running lectures, etc). In hindsight, this was an amazing use of time, and has been shockingly useful in a range of areas. It makes me much better at field-building, facilitating fellowships, and writing up thoughts. Recently I've been reworking the pedagogy for explaining transformer interpretability work at Anthropic, and I've been shocked at how relevant all of this is.
    • A related idea is that of the Pareto Frontier. Most people are bad at teaching, which leads to eg Research Debt in academia. I'm a pretty great teacher, but not exactly world-class. But I'm also a great mathematician, and trying to become a great AI Safety researcher, and there are very, very few people who are great at both - this gives me a lot of room to explore my comparative advantage by eg writing field-building docs.
    • I wish I'd better emphasised just how useful a skill this is
  • A lot of the post centres on teaching in specific contexts. This is reasonable, since it's what I know, but I wish I'd better clarified what would and would not generalise - I'm afraid people who see this post will bounce off because it's not relevant to them
  • I wish I'd given more caveats about teaching gone wrong. My experience teaching younger people who view me as high-status is that it's very easy to appear over-confident. I try to caveat what I say, but I tend to present as fairly confident, and people often take me way too seriously. While the techniques I present here are very effective at teaching, they have the flipside of better inserting my knowledge into the student's system 1 and bypassing some of their mental filters, which can be bad and eg lead to groupthink and lowered agency.
    • Some, such as Socratic method, are better on this front by at least giving me chances to notice if what I'm teaching is wrong
    • Sometimes it may be good to deliberately be a bad teacher, to teach the students agency and give them room to grow on their own and to form their own ideas. It's worth checking for this - I just reflexively use good teaching technique nowadays and it's hard to suppress
  • Some ideas, such as the knowledge graph, are vague intuitions that it would have been good to operationalise more

With all that said, I'd only been blogging for 3 weeks when I wrote this post, and I wrote it in an afternoon, so I'm really happy with this as an artefact to come out of that! I am so, so happy I decided to do a month of daily blogging

Comment by Neel Nanda (neel-nanda-1) on Omicron Variant Post #1: We’re F***ed, It’s Never Over · 2021-11-28T13:55:07.425Z · LW · GW

What fraction of these fizzled out because they were displaced by a fitter variant vs just not spreading further? That seems very important for figuring out how much to freak out

Comment by Neel Nanda (neel-nanda-1) on Sci-Hub sued in India · 2021-11-14T11:21:41.778Z · LW · GW

+1, I was pretty surprised and confused by the 37% stat. If basically all of the labour here comes from taxpayer funded science, where on earth is 63% of the revenue going?!

Comment by Neel Nanda (neel-nanda-1) on App and book recommendations for people who want to be happier and more productive · 2021-11-06T22:33:12.336Z · LW · GW

Thanks for the post! I love a lot of these, and hadn't come across some of them :)

Google docs quick create. Shortcut key or single click to automatically create a new google document or spreadsheet. Saves a ton of time. 

The URLs doc.new or sheet.new also do this, and are pretty low friction (though not quite single click!). They work on any computer though

Quickcompose. You know how easy it is to get distracted by your inbox when you need to send an email? Quick compose makes it so that you can open up a window that’s just a compose window so you can’t get distracted by new emails. 

I really like the extension Inbox When Ready - it hides your inbox by default, unless you click on the 'show inbox' button. This is enough to reduce 'compulsively open email and check things', as well as giving this functionality

Comment by Neel Nanda (neel-nanda-1) on Feature idea: Notification when a parent comment is modified · 2021-10-21T20:44:15.479Z · LW · GW

I feel like I make enough minor edits to my comments (typos etc) that this would be really annoying - I'd feel significantly more constrained about my ability to make edits, because I'd know it would spam people with notifications. Maybe having a "send notifications?" toggle would help

Comment by Neel Nanda (neel-nanda-1) on Petrov Day 2021: Mutually Assured Destruction? · 2021-09-27T14:42:06.414Z · LW · GW

As a counter-point, my day was made significantly better by the front page being nuked in 2020 - it was exciting, novel, hilarious (by my lights - clearly not to some people), made some excellent points about phishing and security, and gave me opportunities to dissect why people oriented to this event differently from me. I expect my experience would have been less good last year had the phishing attempt not happened and we had all simply coordinated. More generally, when a website does something unusual and novel like this, I feel like the value of novelty and interestingness can outweigh the costs of a single day of disrupted use?

I'd further argue that the people highly invested in this seem much more invested in the abstract ideas of trust, community, shared ritual and cohesion than in the object level of the frontpage being down (besides, people can always use greaterwrong.com)

Comment by Neel Nanda (neel-nanda-1) on Petrov Day 2021: Mutually Assured Destruction? · 2021-09-27T14:35:21.803Z · LW · GW

If it helps, here's a comment I wrote last year trying to narrate my internal experience of reading the email (I then read the 2019 threads and eventually twigged how seriously people took it, but that was strongly not my prior - it wouldn't even have occurred to me to ask the question 'do people take this more seriously than a game?')

Comment by Neel Nanda (neel-nanda-1) on Petrov Day 2021: Mutually Assured Destruction? · 2021-09-27T13:30:48.830Z · LW · GW

I was one of the 270 last year and am one of the 100 this year, and I did not understand the context last year. Empirically, neither did Chris last year. Multiple people on the EA Forum have commented about not understanding the context

Comment by Neel Nanda (neel-nanda-1) on Petrov Day 2021: Mutually Assured Destruction? · 2021-09-27T11:02:52.136Z · LW · GW

The problem is that people are entered into a situation where they don't necessarily understand the context and cultural expectations other people may have, could very reasonably misunderstand things, but are exposed to real and meaningful social risks if they do misunderstand things. Framings like "sometimes you get random responsibilities" ONLY make sense given a mutual understanding that the situation is taken seriously, which empirically was obviously not universal here.

Comment by Neel Nanda (neel-nanda-1) on Petrov Day 2021: Mutually Assured Destruction? · 2021-09-26T23:05:01.341Z · LW · GW

The obvious thing is to ask people to consent before entering the game? It's weird to get an email, out of the blue, with launch codes, telling you that you are now part of this game. An email that spells out some of the explicit norms, and asks people to opt in, seems great.

A light-touch intervention could just be giving people a link to click to get the launch codes, which shows some text spelling out norms like this, and asking people to only click the link if they actually want to participate.

EDIT: To be clear, I am participating in this, and would have opted in - I just think it's a really bad norm to not ask for consent first, when we're putting people in a situation with real risks and social consequences, and with wildly differing perceptions of the depth of meaning in this event.

Comment by Neel Nanda (neel-nanda-1) on Petrov Day 2021: Mutually Assured Destruction? · 2021-09-26T18:10:17.389Z · LW · GW

The same happened with me - I thought it was an issue with page loading (I was using a very slow browser, and it took a few seconds to correct)

Comment by Neel Nanda (neel-nanda-1) on Petrov Day 2021: Mutually Assured Destruction? · 2021-09-26T16:03:52.382Z · LW · GW

Mutual Assured Destruction just isn't the same when you can see for sure whether you were nuked

Comment by Neel Nanda (neel-nanda-1) on Review of A Map that Reflects the Territory · 2021-09-13T20:44:36.126Z · LW · GW

Just noting that I had the opposite reaction - I was pleasantly surprised by the fun style after the formal framing, and this made the whole thing more fun for me

Comment by Neel Nanda (neel-nanda-1) on Covid 9/9: Passing the Peak · 2021-09-10T18:17:35.024Z · LW · GW

People even take taxis over it, and I’m confident that if you’re cutting back on mass transit you should dramatically cut back on taxis.

I'm curious why you think this? A taxi with the windows down seems about as well ventilated as a subway with filtered air, and you're trading one other person 1-2m away for having a bunch of people around on a subway. When I ran it through microcovid, a taxi seemed significantly safer (unless the subway is unusually uncrowded, eg it's late at night).

Is the argument that taxi drivers are significantly more likely to be infected than the typical person on the subway, given their job?

Comment by Neel Nanda (neel-nanda-1) on Coordination Schemes Are Capital Investments · 2021-09-07T17:43:45.451Z · LW · GW

Instead of arguing for hours about who got which room in a new apartment, I just wrote down my true preference for how much I was willing to pay for each room. Then I automagically got assigned a room that was cheaper than I had been willing to pay for it.

I'm confused by how this second price auction worked. If there was just one room, I see how you'd do a second price auction to figure out who wins and at which price they get it, but how does it work when there are multiple rooms, and each person purchases exactly one room?

Comment by Neel Nanda (neel-nanda-1) on LessWrong is providing feedback and proofreading on drafts as a service · 2021-09-07T13:48:42.995Z · LW · GW

This seems like a really great initiative, I'm excited to see how it goes.

How high a bar should I set for using this service? I have basically no posts that I'd post with editing help but wouldn't post on my own, but I'd generally appreciate editing help on basically every post I make.

Comment by Neel Nanda (neel-nanda-1) on Training My Friend to Cook · 2021-09-02T22:59:41.128Z · LW · GW

Thanks for the detailed follow-up!

I agree that the distinction between someone explicitly asking to be taught a habit and someone going on a date is important. And I agree that the line between operant conditioning and just not being a dick or otherwise being a good teacher seems a bit blurry.

The remaining thing I feel uncomfortable about is the intentionality here. Lsusr clearly knew what they were doing, and were working towards a long-term plan for reshaping Brittany's motivation system. I think this is quite different from just teaching a friend to the best of your ability, or taking pains to avoid actively being a dick.

Comment by Neel Nanda (neel-nanda-1) on Training My Friend to Cook · 2021-09-01T21:01:58.755Z · LW · GW

I think we're talking past each other. For me, the key point is that lsusr took actions designed to manipulate her emotions and intuitive reactions to things, and clearly did this systematically towards a clear end goal. I call this manipulation, and think that for applying manipulation to be ethical, the person needs to consent to the manipulation, not just to the end goal of learning how to cook. Everything lsusr has said indicated that she consented to wanting to learn how to cook, not to being manipulated like this.

For example, say I'm dating a girl, we're both excited about each other, and want to feel more excited about each other. Even if I know all of this, I would consider it deeply unethical to use operant conditioning to get her to fall more deeply for me.

I saw a comment from lsusr that he sent this post to the friend, and she feels fine with it, which makes me feel better about this whole thing. But I stand by the general principle of "don't manipulate friends without their explicit and knowing consent"

Comment by Neel Nanda (neel-nanda-1) on Training My Friend to Cook · 2021-09-01T17:05:52.176Z · LW · GW

As far as I can tell, those comments say "she was enthusiastic about learning how to cook", not "she was enthusiastic for me manipulating her into being intrinsically excited about cooking". I think there's a very important difference

Comment by Neel Nanda (neel-nanda-1) on Training My Friend to Cook · 2021-08-31T16:52:15.557Z · LW · GW

Strongly downvoted. It sounds like the end result here was good, but I feel extremely uncomfortable with the manipulative undertones and lack of agency given to your friend throughout all this. Did she consent to any of this before you started the project, and did you explain to her how you intend to change her motivation system around cooking before trying this? This is an extremely important detail which I didn't see mentioned - I think you should only try to fix other people's lives if they explicitly ask you to.

Comment by Neel Nanda (neel-nanda-1) on OpenAI Codex: First Impressions · 2021-08-13T19:29:48.534Z · LW · GW

Still, for competing against top-notch programmers, top 100 is quite a feat. I mean, contrast the statistics below (Codex vs Avg. player):

I wouldn't read too much into this - the challenge was buggy and slow enough that I almost ragequit, and it took me about an hour to start submitting. I expect many people had similarly bad experiences

Comment by Neel Nanda (neel-nanda-1) on What was my mistake evaluating risk in this situation? · 2021-08-05T09:39:44.952Z · LW · GW

Media has a strong incentive to cause hype over things that aren't really dangerous trends

I think the inference here was 'media has a strong incentive to cause hype over stuff that doesn't matter, so surely they have an even stronger incentive to cause hype over stuff that is actually dangerous'. Empirically, this was wrong, but I'm confused about why!

Comment by Neel Nanda (neel-nanda-1) on What made the UK COVID-19 case count drop? · 2021-08-03T12:45:26.666Z · LW · GW

This seems highly unlikely. There hasn't been a significant drop in testing, and Scotland (which saw an earlier peak) has also seen a drop in hospitalizations, which are much harder to fake.

Comment by Neel Nanda (neel-nanda-1) on What made the UK COVID-19 case count drop? · 2021-08-03T12:43:43.441Z · LW · GW

The thing we care about here is the actual weather in the UK over the last few weeks, not the average climate data. In the last few weeks there's been a bit of a heatwave, and everything has been dry and sunny (at least, in London).

Comment by Neel Nanda (neel-nanda-1) on Delta Strain: Fact Dump and Some Policy Takeaways · 2021-08-02T12:23:27.069Z · LW · GW

I'm fairly sure that the 0.3% was averaged across the 5% of people reporting long-term symptoms. The vast majority will be mild, while a small fraction will really suck (I think, given the model of the post)

Comment by Neel Nanda (neel-nanda-1) on Torture vs Specks: Sadist version · 2021-08-01T13:55:53.718Z · LW · GW

Do you mean negative utilitarianism would get them to choose torture, rather than dust specks? I would have considered both to be forms of suffering.

Comment by Neel Nanda (neel-nanda-1) on Delta variant: we should probably be re-masking · 2021-07-25T12:00:15.141Z · LW · GW

Assume really long covid scales similarly to death and hospitalization

This doesn't at all feel obvious to me? At least, I'd put a decent (>20%) chance that this is not true. Eg Long COVID isn't that correlated with hospitalisation

Comment by Neel Nanda (neel-nanda-1) on Intentionally Making Close Friends · 2021-07-24T14:10:17.155Z · LW · GW

It takes about 200 hours of investment in the space of a few months to move a stranger into being a good friend.

My guess is that this number varies a lot between people? I can think of multiple friendships that have felt exciting and close within maybe 3-4 one-on-one encounters of 2-4 hours each.

Comment by Neel Nanda (neel-nanda-1) on Ask Not "How Are You Doing?" · 2021-07-22T12:12:01.140Z · LW · GW

I like "what's the most exciting thing to happen to you recently?" as a replacement/follow-up to how are you - I find it often sparks interesting things

Comment by Neel Nanda (neel-nanda-1) on ($1000 bounty) How effective are marginal vaccine doses against the covid delta variant? · 2021-07-22T12:02:51.239Z · LW · GW

it's likely that the UK will offer third doses of the original Pfizer vaccine to everyone over 50 or especially at risk, some time towards the end of this year.

Interesting, do you have a source for that?

Comment by Neel Nanda (neel-nanda-1) on Intentionally Making Close Friends · 2021-07-12T20:46:49.088Z · LW · GW

Thanks, fixed. LessWrong has a bug where it doesn't like links which don't begin with https://

Comment by Neel Nanda (neel-nanda-1) on Announcing My Free Online Course "Original Seeing With a Focus On Life" · 2021-07-08T10:16:13.312Z · LW · GW

For me, clicking and dragging works to see later quotes. I found this unintuitive though

Comment by Neel Nanda (neel-nanda-1) on What's the effective R for the Delta variant of COVID-19? · 2021-07-03T04:56:23.111Z · LW · GW

Compared to the baseline of the US and Europe, as shown in the OWID source I linked

Comment by Neel Nanda (neel-nanda-1) on What's the effective R for the Delta variant of COVID-19? · 2021-07-02T14:58:04.478Z · LW · GW

The most relevant UK COVID policy (see full guidelines here):

Indoor gatherings are limited to 6 people, and outdoor gatherings to 30. The government is also distributing free lateral flow tests to everyone, though I'm not sure how high uptake is (I have seen very little marketing about this :( ). People are mostly still working from home, though some offices are slowly re-opening. Schools and universities are fully open.

Comment by Neel Nanda (neel-nanda-1) on What's the effective R for the Delta variant of COVID-19? · 2021-07-02T14:53:58.797Z · LW · GW

Vaccine hesitancy is surprisingly low in the UK (as a UK resident, I highly approve). See Our World In Data. Possible factors are that the NHS is trusted and popular, and our regulators have been generally more competent and a bit less risk averse (eg only pausing AstraZeneca for under 40s)

Comment by Neel Nanda (neel-nanda-1) on Intentionally Making Close Friends · 2021-07-01T12:23:09.035Z · LW · GW

Ah, sorry, that sentence was badly worded. I completely agree that a friend reaching out is positive evidence. Though it's still not that strong - highly conscientious people are often good at reaching out to a lot of people, even if they're just doing this out of perceived social obligation. In general, people's bar for reaching out will vary wildly, and so the strength of evidence will vary a lot. You can probably tell whether your friend is unusually conscientious though

The point of that paragraph was that people often interpret the repeated absence of reaching out as strong negative evidence, which I strongly disagree with.

Comment by Neel Nanda (neel-nanda-1) on What precautions should fully-vaccinated people still be taking? · 2021-06-30T17:22:23.237Z · LW · GW

The [best source I've found](https://institute.global/policy/hidden-pandemic-long-covid) finds a 30% reduction in P(Long COVID | infection after 2 vaccine doses). Infection reduction is about 85%, so total risk reduction is about 90%, MUCH less than the risk reduction for hospitalisation.
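
To spell out the arithmetic behind that ~90% figure (a minimal sketch, assuming the two reductions combine multiplicatively):

P(Long COVID | 2 doses) ≈ (1 - 0.85) × (1 - 0.3) × P(Long COVID | unvaccinated) = 0.105 × P(Long COVID | unvaccinated), i.e. roughly a 90% total reduction.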

The study is based on 3,000 infected patients, all over 60, so it's unclear how it generalises to younger people.

In general, there is SOME good quality research on long COVID, and it seems obvious to me that it is a legitimate thing and represents a good fraction of the harm of the pandemic, even if the overall research is much lower quality than I want.

Comment by Neel Nanda (neel-nanda-1) on Intentionally Making Close Friends · 2021-06-29T09:29:35.033Z · LW · GW

All fair points. There are always risks, and always tail risks of super bad outcomes. But there's also the positive upside risk of excellent outcomes, and 'finding a new close friend' definitely qualifies as this for me. Ultimately, everything is a cost-benefit calculation. For me, this strategy has been overwhelmingly worth it, and I refuse to let the fear of tail risks close off such vast amounts of potential value. But it's hard to compare small probabilities of very bad or very good outcomes to each other, and maybe the opposite trade-off is correct for other people? Idk, my guess is that most people have a significant bias towards paranoia and risk-aversion, not the other way round. I'd also guess it depends on the social circles you move in, and the base rate for very bad outcomes. A party with friends-of-friends will probably have pretty different base rates to random strangers?

Vulnerability is not just an imaginary weakness that should be overcome; it may also point to something real.

Agreed! I'm arguing that most people have much higher barriers to being vulnerable than they should, and that many things that feel vulnerable to share really aren't that dangerous to share. That doesn't mean that none of the things vulnerability protects are worth protecting. Eg, sharing my deepest insecurities is a pretty bad idea, if the person can then turn around and use them to cause me a lot of pain.

My guess is that most people shy way too far away from being vulnerable, and being nudged towards 'just say fuck it and practice being vulnerable' will get them closer to the optimal amount of vulnerability. And that it's probably much harder to overshoot and end up too vulnerable, if you're already someone who has major issues with it.

Comment by Neel Nanda (neel-nanda-1) on Intentionally Making Close Friends · 2021-06-29T09:14:19.495Z · LW · GW

Awesome! I also found that podcast episode super inspiring. Are there any techniques you're particularly excited to try?

Comment by Neel Nanda (neel-nanda-1) on Intentionally Making Close Friends · 2021-06-29T09:13:42.725Z · LW · GW

Ah, I didn't notice the paywall. Thanks for collecting those!

I also like Spencer Greenberg's Life-Changing Questions and Askhole (again, most of these are unsuitable, but there are some gems in there)

Comment by Neel Nanda (neel-nanda-1) on What precautions should fully-vaccinated people still be taking? · 2021-06-28T22:12:52.169Z · LW · GW

It is obviously correct to wear a mask only if you do not have access to a respirator or PAPR.

Sure, I'd agree with this. Things like N95s and P100s are much better than cloth or surgical masks.

Comment by Neel Nanda (neel-nanda-1) on What precautions should fully-vaccinated people still be taking? · 2021-06-28T21:31:12.449Z · LW · GW

Once you’re fully vaccinated the risk - including risk of post-viral fatigue - is in the range we normally consider tolerable.

Do you have a source for this? I've seen good data about hospitalization and risk of death, but nothing about long COVID. They probably correlate, but I've seen suggestive data that they correlate less than I'd intuitively expect.

It definitely doesn't feel like there's enough data to be confident in saying 'this is now a silly thing to care about or spend mental energy on'. Though I'd mostly agree if you live in an area with very low case counts.