Posts

Intentionally Making Close Friends 2021-06-27T23:06:49.269Z
Noticing and Overcoming Bias 2021-03-06T21:06:36.495Z
When you already know the answer - Using your Inner Simulator 2021-02-23T17:58:29.336Z
Overcoming Helplessness 2021-02-22T15:03:45.869Z
The World is Full of Wasted Motion 2021-02-02T19:02:48.724Z
Retrospective on Teaching Rationality Workshops 2021-01-03T17:15:00.479Z
Asking For Help 2020-12-27T11:32:32.462Z
On Reflection 2020-12-14T16:14:29.818Z
My case for starting blogging 2020-10-22T17:43:51.865Z
On Slack - Having room to be excited 2020-10-10T19:02:42.109Z
On Option Paralysis - The Thing You Actually Do 2020-10-03T11:50:57.070Z
Your Standards are Too High 2020-10-01T17:03:31.969Z
Learning how to learn 2020-09-30T16:50:19.356Z
Seek Upside Risk 2020-09-29T16:47:14.033Z
Macro-Procrastination 2020-09-28T16:07:48.670Z
Taking Social Initiative 2020-09-19T15:31:21.082Z
On Niceness: Looking for Positive Externalities 2020-09-14T18:03:12.196Z
Stop pressing the Try Harder button 2020-09-05T09:10:05.964Z
Helping people to solve their problems 2020-08-31T20:41:04.796Z
Meaningful Rest 2020-08-29T15:50:05.782Z
How to teach things well 2020-08-28T16:44:27.817Z
Live a life you feel excited about 2020-08-21T19:16:17.793Z
On Creativity - The joys of 5 minute timers 2020-08-18T06:26:55.493Z
On Systems - Living a life of zero willpower 2020-08-16T16:44:13.100Z
On Procrastination - The art of shaping your future actions 2020-08-01T10:22:44.450Z
What it means to optimise 2020-07-25T09:40:09.616Z
How to learn from conversations 2020-07-25T09:36:16.105Z
Taking the first step 2020-07-25T09:33:45.111Z
Become a person who Actually Does Things 2020-07-25T09:29:21.314Z
The Skill of Noticing Emotions 2020-06-04T17:48:28.782Z

Comments

Comment by Neel Nanda (neel-nanda-1) on Delta variant: we should probably be re-masking · 2021-07-25T12:00:15.141Z · LW · GW
Assume really long covid scales similarly to death and hospitalization

This doesn't at all feel obvious to me? At least, I'd put a decent (>20%) chance that this is not true. Eg Long COVID isn't that correlated with hospitalisation

Comment by Neel Nanda (neel-nanda-1) on Intentionally Making Close Friends · 2021-07-24T14:10:17.155Z · LW · GW
It takes about 200 hours of investment in the space of a few months to move a stranger into being a good friend.

My guess is that this number varies a lot between people? I can think of multiple friendships that have felt exciting and close within maybe 3-4 one-on-one encounters of 2-4 hours each.

Comment by Neel Nanda (neel-nanda-1) on Ask Not "How Are You Doing?" · 2021-07-22T12:12:01.140Z · LW · GW

I like "what's the most exciting thing to happen to you recently?" as a replacement/follow-up to how are you - I find it often sparks interesting things

Comment by Neel Nanda (neel-nanda-1) on ($1000 bounty) How effective are marginal vaccine doses against the covid delta variant? · 2021-07-22T12:02:51.239Z · LW · GW

it's likely that the UK will offer third doses of the original Pfizer vaccine to everyone over 50 or especially at risk, some time towards the end of this year.

Interesting, do you have a source for that?

Comment by Neel Nanda (neel-nanda-1) on Intentionally Making Close Friends · 2021-07-12T20:46:49.088Z · LW · GW

Thanks, fixed. LessWrong has a bug where it doesn't like links which don't begin with https://

Comment by Neel Nanda (neel-nanda-1) on Announcing My Free Online Course "Original Seeing With a Focus On Life" · 2021-07-08T10:16:13.312Z · LW · GW

For me, clicking and dragging works to see later quotes. I found this unintuitive though

Comment by Neel Nanda (neel-nanda-1) on What's the effective R for the Delta variant of COVID-19? · 2021-07-03T04:56:23.111Z · LW · GW

Compared to the baseline of the US and Europe, as shown in the OWID source I linked

Comment by Neel Nanda (neel-nanda-1) on What's the effective R for the Delta variant of COVID-19? · 2021-07-02T14:58:04.478Z · LW · GW

The most relevant UK COVID policy (see full guidelines here):

Indoor gatherings are limited to 6 people, outdoor gatherings to 30. The government is also distributing free lateral flow tests to everyone, though I'm not sure how high uptake is (I have seen very little marketing about this :( ). People are mostly still working from home, though some offices are slowly re-opening. Schools and universities are fully open.

Comment by Neel Nanda (neel-nanda-1) on What's the effective R for the Delta variant of COVID-19? · 2021-07-02T14:53:58.797Z · LW · GW

Vaccine hesitancy is surprisingly low in the UK (as a UK resident, I highly approve). See Our World In Data. Possible factors are that the NHS is trusted and popular, and our regulators have been generally more competent and a bit less risk averse (eg only pausing AstraZeneca for under 40s)

Comment by Neel Nanda (neel-nanda-1) on Intentionally Making Close Friends · 2021-07-01T12:23:09.035Z · LW · GW

Ah, sorry, that sentence was badly worded. I completely agree that a friend reaching out is positive evidence. Though it's still not that strong - highly conscientious people are often good at reaching out to a lot of people, even if they're just doing this out of perceived social obligation. In general, people's bar for reaching out will vary wildly, and so the strength of evidence will vary a lot. You can probably tell whether your friend is unusually conscientious though

The point of that paragraph was that people often interpret the repeated absence of reaching out as strong negative evidence, which I strongly disagree with.

Comment by Neel Nanda (neel-nanda-1) on What precautions should fully-vaccinated people still be taking? · 2021-06-30T17:22:23.237Z · LW · GW

The [best source I've found](https://institute.global/policy/hidden-pandemic-long-covid) finds a 30% reduction in P(Long COVID | infection after 2 vaccine doses). Infection reduction is about 85%, so total risk reduction is about 90%, MUCH less than the risk reduction for hospitalisation.
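
As a sanity check on that multiplication - a minimal sketch using the rough figures above, not precise estimates:

```python
# Vaccines cut infection risk ~85%, and cut P(Long COVID | infection) ~30%.
# The relative risks multiply, since you have to get infected first.
relative_infection_risk = 1 - 0.85  # infection risk after 2 doses, vs unvaccinated
relative_longcovid_risk = 1 - 0.30  # long COVID risk once infected, vs unvaccinated
total_relative_risk = relative_infection_risk * relative_longcovid_risk
print(f"total long COVID risk reduction: {1 - total_relative_risk:.1%}")
# -> ~89.5%, i.e. the 'about 90%' figure above
```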

The study is based on 3,000 infected patients, all over 60, so it's unclear how it generalises to younger people.

In general, there is SOME good quality research on long COVID, and it seems obvious to me that it is a legitimate condition and represents a good fraction of the harm of the pandemic, even if the research is overall much lower quality than I want.

Comment by Neel Nanda (neel-nanda-1) on Intentionally Making Close Friends · 2021-06-29T09:29:35.033Z · LW · GW

All fair points. There are always risks, and always tail risks of super bad outcomes. But also the positive upside risk of excellent outcomes, and 'finding a new close friend' definitely qualifies as this for me. Ultimately, everything is a cost-benefit calculation. For me, this strategy has been overwhelmingly worth it, and I refuse to let the fear of tail risks close off such vast amounts of potential value. But it's hard to compare small probabilities of very bad or very good outcomes to each other, and maybe the opposite trade-off is correct for other people? Idk, my guess is that most people have a significant bias towards paranoia and risk-aversion, not the other way round. I'd also guess it depends on the social circles you move in, and the base rate for very bad outcomes. A party with friends-of-friends will probably have pretty different base rates to random strangers?

Vulnerability is not just an imaginary weakness that should be overcome; it may also point to something real.

Agreed! I'm arguing that most people have much higher barriers to being vulnerable than they should, and that many things which feel vulnerable to share really aren't that dangerous to share. That doesn't mean the things vulnerability protects are never worth protecting. Eg, sharing my deepest insecurities is a pretty bad idea if the person can then turn around and use them to cause me a lot of pain.

My guess is that most people shy way too far away from being vulnerable, and being nudged towards 'just say fuck it and practice being vulnerable' will get them closer to the optimal amount of vulnerability. And that it's probably much harder to overshoot and end up too vulnerable, if you're already someone who has major issues with it.

Comment by Neel Nanda (neel-nanda-1) on Intentionally Making Close Friends · 2021-06-29T09:14:19.495Z · LW · GW

Awesome! I also found that podcast episode super inspiring. Are there any techniques you're particularly excited to try?

Comment by Neel Nanda (neel-nanda-1) on Intentionally Making Close Friends · 2021-06-29T09:13:42.725Z · LW · GW

Ah, I didn't notice the paywall. Thanks for collecting those!

I also like Spencer Greenberg's Life-Changing Questions and Askhole (again, most of these are unsuitable, but there are some gems in there)

Comment by Neel Nanda (neel-nanda-1) on What precautions should fully-vaccinated people still be taking? · 2021-06-28T22:12:52.169Z · LW · GW
It is obviously correct to wear a mask only if you do not have access to a respirator or PAPR.

Sure, I'd agree with this. Things like N95s and P100s are much better than cloth or surgical masks.

Comment by Neel Nanda (neel-nanda-1) on What precautions should fully-vaccinated people still be taking? · 2021-06-28T21:31:12.449Z · LW · GW
Once you’re fully vaccinated the risk - including risk of post-viral fatigue - is in the range we normally consider tolerable.

Do you have a source for this? I've seen good data about hospitalisation and risk of death, but nothing about long COVID. They probably correlate, but I've seen suggestive data that they correlate less than I'd intuitively expect.

It definitely doesn't feel like there's enough data to be confident in saying 'this is now a silly thing to care about or spend mental energy on'. Though I'd mostly agree if you live in an area with very low case counts.

Comment by Neel Nanda (neel-nanda-1) on What precautions should fully-vaccinated people still be taking? · 2021-06-28T11:47:02.809Z · LW · GW
Masks probably don't work against the variants (masks wiped out the flu but not the massive fall/winter covid wave).

This seems like the wrong inference. The R0 of flu is something like 1.2, while the R0 of Alpha was about 4 (at pre-COVID levels of social distancing). 'Masks work' looks like masks reducing R0 by some factor: if this brings R0 below 1, the disease is wiped out; if it stays above 1, you still get a massive wave. Because the R0 of flu is so much lower, 'flu was wiped out but COVID wasn't' is approximately zero evidence about the effectiveness of masks.

For example, this paper found a 25% reduction in R0 from universal mask wearing. That would reduce flu to 0.9 and wipe it out, but only reduce Alpha to 3, which still spreads rapidly. Yet it is still obviously correct to wear a mask.
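
The arithmetic behind this, as a minimal sketch (the R0 figures are the rough estimates above; the 25% reduction is the linked paper's headline number):

```python
# A fixed multiplicative cut in transmission can push a low-R0 disease
# below the R = 1 extinction threshold while leaving a high-R0 disease
# in epidemic territory. Figures are rough estimates, not precise values.
MASK_REDUCTION = 0.25  # ~25% cut in transmission from universal masking

for disease, r0 in [("flu", 1.2), ("Alpha", 4.0)]:
    r_masked = r0 * (1 - MASK_REDUCTION)
    outcome = "dies out (R < 1)" if r_masked < 1 else "keeps spreading (R > 1)"
    print(f"{disease}: R0 {r0} -> {r_masked:.1f} with masks; {outcome}")
```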

Comment by Neel Nanda (neel-nanda-1) on Empirical Observations of Objective Robustness Failures · 2021-06-24T10:23:45.290Z · LW · GW

This seems like really great work, nice job! I'd be excited to see more empirical work around inner alignment.

One of the things I really like about this work is the cute videos that clearly demonstrate 'this agent is doing dumb stuff because its objective is non-robust'. Have you considered putting shorter clips of some of the best bits on YouTube, or making GIFs? (Eg, a 5-10 second clip of the CoinRun agent during training, followed by a 5-10 second clip of the same agent at test time.) It seemed that one of the major strengths of the CoastRunners clip was how easily shareable and funny it was, and I could imagine this research getting more exposure if it's easier to share highlights. I found the Google Drive pretty hard to navigate

Comment by Neel Nanda (neel-nanda-1) on Irrational Modesty · 2021-06-24T10:11:14.830Z · LW · GW

Seconded, that line really hit home for me

Comment by Neel Nanda (neel-nanda-1) on The Point of Trade · 2021-06-22T18:12:21.490Z · LW · GW

My guess for missing things:

Economies of scale - it's probably easier, per kg of steel produced, to make a lot of steel from a lot of iron than a little steel from a little iron. So you want raw materials to be concentrated.

Diminishing marginal returns - so this pushes towards a uniform distribution of everything

Comment by Neel Nanda (neel-nanda-1) on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-21T08:45:08.363Z · LW · GW
But the global chip shortage means semiconductor foundries like Taiwan Semiconductor Manufacturing Co. are already scrambling to fill other orders. They are also cautious about adding new capacity given how finicky crypto demand has proven to be. Bernstein estimates that crypto will only contribute about 1% of TSMC’s revenue this year, versus around 10% in the first half of 2018 during the last crypto boom.

Looking at the WSJ source, it looks like it's actually arguing that Bitcoin mining wasn't a big cause of the global chip shortage, and that the 1% figure is a low - it had previously been around 10%.

Still less than I'd expected, but 10% seems plausibly enough to significantly boost profits?

Comment by Neel Nanda (neel-nanda-1) on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-07T12:01:07.064Z · LW · GW
I think Vicarious AI is doing more AGI-relevant work than anyone

Interesting, can you say more about this/point me to any good resources on their work? I never hear about Vicarious in AI discussions

Comment by Neel Nanda (neel-nanda-1) on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-07T12:00:19.782Z · LW · GW

One approach that feels a bit more direct is investing in semiconductor stocks. If we expect AGI to be a big deal and massively economically relevant, it seems likely that this will involve vast amounts of compute, and thus need a lot of computer chips. I believe ASML (Netherlands based, which makes the lithography machines that chip fabs depend on) and TSMC (Taiwan based, the largest chip manufacturer) are two of the most important publicly traded companies in the semiconductor supply chain, though I'm unsure which countries let you easily invest in them.

Problems with this:

  • A bunch of their current business comes from crypto-mining, so this also has some crypto exposure. The stocks have done well over the last few years, and I believe this is driven more by the crypto boom than the AI boom
  • TSMC is based in Taiwan, and thus is exposed to Taiwan-China problems
  • This assumes AGI will require a lot of compute (which I personally believe, but YMMV)
  • It's unclear how much of the value of AGI will be captured by semiconductor manufacturers

Comment by Neel Nanda (neel-nanda-1) on The Alignment Forum should have more transparent membership standards · 2021-06-06T22:00:43.175Z · LW · GW

A similar bug - when I go to the AF, the top right says Log In, then has a Sign Up option, and leads me through the standard sign-up process. Given that it's invite only, seems like it should tell people this, and redirect them to make a LW account?

Comment by Neel Nanda (neel-nanda-1) on The Alignment Forum should have more transparent membership standards · 2021-06-06T21:53:50.475Z · LW · GW
I agree that most people don't read the manual, but I think that if you're confused about something and then don't read the manual, it's on you.

I think responsibility is the wrong framing here? There are empirical questions of 'what proportion of users will try engaging with the software?', 'how many users will feel confused?', 'how many users will be frustrated and quit/leave with a bad impression?'. I think the Alignment Forum should be (in part) designed with these questions in mind. If there's a post on the front page that people 'could' think to read, but in practice don't, then I think this matters.

I also don't think they could make it much more obvious than being always on the front page.

I disagree. I think the right way to do user interfaces is to present the relevant information to the user at the appropriate time. Eg, when they try to sign up, give a pop-up explaining how that process works (or linking to the relevant part of the FAQ). Ditto when they try making a comment, or making a post. I expect this would expose many more users to the right information at the right time, rather than relying on them to think to look at the stickied post and filter through for the information they want

Comment by Neel Nanda (neel-nanda-1) on The Alignment Forum should have more transparent membership standards · 2021-06-05T20:53:46.569Z · LW · GW

I think most people just don't read the manual? And I think good user interfaces don't assume they do

Speaking personally, I'm an alignment forum member, read a bunch of posts on there, but never even noticed that post existed

Comment by Neel Nanda (neel-nanda-1) on The Alignment Forum should have more transparent membership standards · 2021-06-05T07:32:43.666Z · LW · GW

Hmm, fair point. I feel concerned about how illegible that is, though, especially to an academic outsider who wants to engage but lacks context on LW. Eg, I've been using AF for a while, and wasn't aware that comments were regularly promoted from LW. And if we're talking about perception of the field, I think surface level impressions like this are super important

Comment by Neel Nanda (neel-nanda-1) on The Alignment Forum should have more transparent membership standards · 2021-06-04T18:37:07.439Z · LW · GW

And the field overall also has vastly more of its discussion public than almost any academic field I can think of and can easily be responded to by researchers from a broad variety of fields

What do you mean by this? I imagine the default experience of a researcher who wants to respond to some research, but has minimal prior exposure to the community, is to be linked to the Alignment Forum, try to comment, and find that they can't. I expect commenting on LessWrong instead to be non-obvious as a thing to do, and to feel low-status/not like having a real academic discussion

Comment by Neel Nanda (neel-nanda-1) on What is the Risk of Long Covid after Vaccination? · 2021-05-31T19:52:51.912Z · LW · GW

The risk of death from covid after vaccination is near zero and this seems to be the case despite the variants

This seems to be true, but this doesn't obviously imply the risk of long COVID is significantly decreased. As far as I'm aware, no one has really studied this. On priors I'd guess that vaccines help a bunch, but I don't understand what's going on here very well.

And I think this is an important question: long COVID seems to represent a lot of the harm of COVID to young people. If case rates in your area aren't that low, this definitely seems like a valid question to ask

Comment by Neel Nanda (neel-nanda-1) on [AN #149]: The newsletter's editorial policy · 2021-05-10T22:17:50.930Z · LW · GW
One or two people suggested adding links to interesting papers that I wouldn't have time to summarize. I actually used to do this when the newsletter first started, but it seemed like no one was clicking on those links so I stopped doing that. I'm pretty sure that would still be the case now so I'm not planning to restart that practice.

A possible experiment: frame this as a 'request for summaries' - link to the papers you won't get round to, and offer to publish, in a future newsletter, any sufficiently good summaries of those papers that someone sends you.

Also, damn! I really like the long summaries, and would be sad to see them go (though obviously you should listen to a survey of 66 people over my opinion)

Comment by Neel Nanda (neel-nanda-1) on Less Realistic Tales of Doom · 2021-05-07T21:07:36.214Z · LW · GW

I thoroughly enjoyed this post. Thanks! I particularly loved the twist in the Y2.1K bug

Comment by Neel Nanda (neel-nanda-1) on AMA: Paul Christiano, alignment researcher · 2021-05-01T19:05:06.476Z · LW · GW
It's not exactly clear what you do with such a story or what the upside is, it's kind of a vague theory of change and most people have some specific theory of change they are more excited about (even if this kind of story is a bit of a public good that's useful on a broader variety of perspectives / to people who are skeptical).

Ah, interesting! I'm surprised to hear that. I was under the impression that while many researchers had a specific theory of change, it was often motivated by an underlying threat model, and that different threat models lead to different research interests.

Eg, someone worried about a future where AIs control the world but are not human-comprehensible feels very different from someone worried about a world where we produce an expected utility maximiser with a subtly incorrect objective, resulting in bad convergent instrumental goals.

Do you think this is a bad model of how researchers think? Or are you, eg, arguing that having a detailed, concrete story isn't important here, just the vague intuition for how AI goes wrong?

Comment by Neel Nanda (neel-nanda-1) on AMA: Paul Christiano, alignment researcher · 2021-04-29T12:40:14.073Z · LW · GW

What's the engine game?

Comment by Neel Nanda (neel-nanda-1) on AMA: Paul Christiano, alignment researcher · 2021-04-28T21:52:23.540Z · LW · GW

What research in the past 5 years has felt like the most significant progress on the alignment problem? Has any of it made you more or less optimistic about how easy the alignment problem will be?

Comment by Neel Nanda (neel-nanda-1) on AMA: Paul Christiano, alignment researcher · 2021-04-28T21:49:55.979Z · LW · GW

Do you have any advice for junior alignment researchers? In particular, what do you think are the skills and traits that make someone an excellent alignment researcher? And what do you think someone can do early in a research career to be more likely to become an excellent alignment researcher?

Comment by Neel Nanda (neel-nanda-1) on AMA: Paul Christiano, alignment researcher · 2021-04-28T21:44:22.702Z · LW · GW

What is your theory of change for the Alignment Research Center? That is, what are the concrete pathways by which you expect the work done there to systematically lead to a better future?

Comment by Neel Nanda (neel-nanda-1) on AMA: Paul Christiano, alignment researcher · 2021-04-28T21:41:29.759Z · LW · GW

There has been surprisingly little written on concrete threat models for how AI leads to existential catastrophes (though you've done some great work rectifying this!). Why is this? And what are the most compelling threat models that don't have good public write-ups? In particular, are there under-appreciated threat models that would lead to very different research priorities within Alignment?

Comment by Neel Nanda (neel-nanda-1) on AMA: Paul Christiano, alignment researcher · 2021-04-28T21:39:04.368Z · LW · GW

Pre-hindsight: 100 years from now, it is clear that your research has been net bad for the long-term future. What happened?

Comment by Neel Nanda (neel-nanda-1) on AMA: Paul Christiano, alignment researcher · 2021-04-28T21:38:14.165Z · LW · GW

You seem in the unusual position of having done excellent conceptual alignment work (eg with IDA), and excellent applied alignment work at OpenAI, which I'd expect to be pretty different skillsets. How did you end up doing both? And how useful have you found ML experience for doing good conceptual work, and vice versa?

Comment by Neel Nanda (neel-nanda-1) on AMA: Paul Christiano, alignment researcher · 2021-04-28T21:36:20.700Z · LW · GW

What are the most important ideas floating around in alignment research that don't yet have a public write-up? (Or, even better, that have a public write-up but could do with a good one?)

Comment by Neel Nanda (neel-nanda-1) on AMA: Paul Christiano, alignment researcher · 2021-04-28T21:34:55.812Z · LW · GW

You gave a great talk on the AI Alignment Landscape 2 years ago. What would you change if giving the same talk today?

Comment by Neel Nanda (neel-nanda-1) on AMA: Paul Christiano, alignment researcher · 2021-04-28T21:33:59.629Z · LW · GW

What are the highest priority things (by your lights) in Alignment that nobody is currently seriously working on?

Comment by Neel Nanda (neel-nanda-1) on [Linkpost] Treacherous turns in the wild · 2021-04-27T17:23:06.064Z · LW · GW

In a real turn, you don't get this kind of warning.

I disagree; I think that toy results like this are exactly the kind of warning we'd expect to see.

You might not get a warning shot from a superintelligence, but it seems great to collect examples like this of warning shots from dumber systems. If there's going to be continuous takeoff, and there's going to be a treacherous turn eventually, then watching closely for failed examples (though hopefully ones more sophisticated than this!) seems like a great way to get people to take treacherous turns seriously

Comment by Neel Nanda (neel-nanda-1) on What are fun little puzzles / games / exercises to learn interesting concepts? · 2021-03-18T07:58:55.128Z · LW · GW

The Clearer Thinking Calibrate Your Judgement tool seems worth checking out.

https://www.clearerthinking.org/post/2019/10/16/practice-making-accurate-predictions-with-our-new-tool

Comment by Neel Nanda (neel-nanda-1) on Strong Evidence is Common · 2021-03-16T11:09:48.579Z · LW · GW

I really like this post! But I have a nagging intuition of 'sure, the first example in this post seems legit, but I don't think this should actually update anything in my worldview for the real-life situations where I actively think about Bayes Rule + epistemics'. And I definitely don't agree with your example about top 1% traders. My attempt to put this into words:

1. Strong evidence is rarely independent. Hearing you say 'my name is Mark' to person A might be 20,000:1 odds, but hearing you then say it to person B is like 10:1 tops. Most hypotheses that explain the first event well also explain the second event well, so while the first sample contains the most information, the second sample contains way less. This makes the idea much less exciting. (See the toy sketch after this list.)

It's also much easier to get to middling probabilities than to high probabilities. This makes sense: I'm only going to explicitly consider the odds of <100 hypotheses for most questions, so a hypothesis with, say, <1% probability isn't likely to be worth thinking about. But to get to 99%, a hypothesis needs to defeat all of the other ones too.

Eg, in the 'top 1% of traders' example, it might be easy to be confident I'm above the 90th percentile, but much harder to move beyond that.

2. This gets much messier when I'm facing an adversarial process. If you say 'my name is Mark Xu, want to bet about what's on my driver's license?', this is much worse evidence, because I now face adverse selection. Many real-world problems I care about involve other people applying optimisation pressure to shape the evidence I see, and some of this involves adversarial potential. The world does not tend to involve people trying to deceive me about world capitals.

An adversarial process could be someone else trying to trick me, but it could also be a cognitive bias I have, eg 'I want to believe that I am an awesome, well-calibrated person'. It could also be selection bias - what is the process that generated the evidence I see?

3. Some questions have obvious answers, others don't. The questions most worth thinking about are rarely the ones that are obvious. The ones where I can access strong evidence easily are much less likely to be worth thinking about. If someone disagrees with me, that's at least weak evidence against the existence of strong evidence.
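
A toy sketch of the odds arithmetic in (1), in Bayes' odds form. The likelihood ratios are the made-up figures above, and the one-in-a-million prior is also invented for illustration:

```python
# Posterior odds = prior odds * likelihood ratio. Overwhelming-looking
# evidence barely moves the posterior further once it's redundant with
# what you've already seen, and middling probabilities come much more
# easily than extreme ones.

def update(prior_odds: float, likelihood_ratio: float) -> float:
    return prior_odds * likelihood_ratio

odds = 1 / 1_000_000         # invented prior odds that a stranger has this name
odds = update(odds, 20_000)  # hears 'my name is Mark': huge, independent update
odds = update(odds, 10)      # hears it repeated: mostly redundant, tiny update
print(f"posterior probability: {odds / (1 + odds):.1%}")  # ~16.7%: middling, not certain
```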

Comment by Neel Nanda (neel-nanda-1) on Mentorship, Management, and Mysterious Old Wizards · 2021-02-27T08:05:52.909Z · LW · GW

+1. I went to a CFAR camp for high schoolers a few years ago, and the idea that I can be ambitious and actually fix problems in my life was BY FAR the biggest takeaway I got (and one of the most valuable life lessons I ever learned)

Comment by Neel Nanda (neel-nanda-1) on When you already know the answer - Using your Inner Simulator · 2021-02-24T08:51:57.931Z · LW · GW

As a single point of anecdata, I personally am fairly prone to negative thoughts and self-blame, and find this super helpful for overcoming that. My Inner Simulator seems to be much better grounded than my spirals of anxiety, and not prone to the same biases.

Some examples:

I'm stressing out about a tiny mistake I made, and am afraid that a friend of mine will blame me for it. So I simulate having the friend find out and get angry with me about it, and ask myself 'am I surprised at this outcome'. And discover that yes, I am very surprised by this outcome - that would be completely out of character and would feel unreasonable to me in the moment.

I have an upcoming conversation with someone new and interesting, and I'm feeling insecure about my ability to make good first impressions. I simulate the conversation happening and me leaving feeling like it went super well, and check how surprised I feel. And discover that I don't feel surprised - that in fact this happens reasonably often.

Such a person could also come up with a way they could improve their life, fail to implement it, and then feel guilty when their reality fails to measure up to their imagined future. 

This seems like a potentially fair point. I sometimes encounter this problem. Though I find that my Inner Sim is a fair bit better calibrated about what solutions might actually work. Eg it has a much better sense for 'I'll just procrastinate and forget about this'. On balance, I find that the benefits of 'sometimes having a great idea that works' + the motivation to implement it far outweigh this failure mode, but your mileage may vary.

Comment by Neel Nanda (neel-nanda-1) on When you already know the answer - Using your Inner Simulator · 2021-02-24T08:40:19.547Z · LW · GW

Nice, I really like the approach of 'write up a concrete question -> assume I received a helpful answer -> let my inner sim fill in the blanks about what it says'

Comment by Neel Nanda (neel-nanda-1) on Anti-Aging: State of the Art · 2021-02-12T08:36:16.494Z · LW · GW

Ooh, no. That's super interesting, thanks!

Comment by Neel Nanda (neel-nanda-1) on Anti-Aging: State of the Art · 2021-02-02T10:18:22.738Z · LW · GW
How would writing the question help to convince people? Would it not only be convincing in 5-10 years' time if some of the predictions turn out to be accurate? Or, do you think if consensus on a Metaculus question that prediction X will occur is in and of itself convincing for rationalists? 

I would personally find a consensus on Metaculus pretty convincing (at least, conditional on there being a significant number of predictions on the question). I find it hard to gauge other people's expertise and how much to defer to them, especially when I just see their point of view. Aggregating many people's predictions is much more persuasive to me, and many of the top Metaculus predictors seem to have good epistemics.