Posts

If brains are computers, what kind of computers are they? (Dennett transcript) 2020-01-30T05:07:00.345Z · score: 37 (17 votes)
2018 Review: Voting Results! 2020-01-24T02:00:34.656Z · score: 128 (32 votes)
10 posts I like in the 2018 Review 2020-01-11T02:23:09.184Z · score: 34 (8 votes)
Voting Phase of 2018 LW Review 2020-01-08T03:35:27.204Z · score: 58 (13 votes)
(Feedback Request) Quadratic voting for the 2018 Review 2019-12-20T22:59:07.178Z · score: 37 (11 votes)
[Review] Meta-Honesty (Ben Pace, Dec 2019) 2019-12-10T00:37:43.561Z · score: 30 (9 votes)
[Review] On the Chatham House Rule (Ben Pace, Dec 2019) 2019-12-10T00:24:57.206Z · score: 43 (13 votes)
The Review Phase 2019-12-09T00:54:28.514Z · score: 58 (16 votes)
The Lesson To Unlearn 2019-12-08T00:50:47.882Z · score: 39 (12 votes)
Is the rate of scientific progress slowing down? (by Tyler Cowen and Ben Southwood) 2019-12-02T03:45:56.870Z · score: 41 (12 votes)
Useful Does Not Mean Secure 2019-11-30T02:05:14.305Z · score: 49 (14 votes)
AI Alignment Research Overview (by Jacob Steinhardt) 2019-11-06T19:24:50.240Z · score: 44 (9 votes)
How feasible is long-range forecasting? 2019-10-10T22:11:58.309Z · score: 43 (12 votes)
AI Alignment Writing Day Roundup #2 2019-10-07T23:36:36.307Z · score: 35 (9 votes)
Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More 2019-10-04T04:08:49.942Z · score: 172 (63 votes)
Follow-Up to Petrov Day, 2019 2019-09-27T23:47:15.738Z · score: 83 (27 votes)
Honoring Petrov Day on LessWrong, in 2019 2019-09-26T09:10:27.783Z · score: 138 (52 votes)
SSC Meetups Everywhere: Salt Lake City, UT 2019-09-14T06:37:12.296Z · score: 0 (0 votes)
SSC Meetups Everywhere: San Diego, CA 2019-09-14T06:34:33.492Z · score: 0 (0 votes)
SSC Meetups Everywhere: San Jose, CA 2019-09-14T06:31:06.068Z · score: 0 (0 votes)
SSC Meetups Everywhere: San José, Costa Rica 2019-09-14T06:25:45.112Z · score: 0 (0 votes)
SSC Meetups Everywhere: São José dos Campos, Brazil 2019-09-14T06:18:23.523Z · score: 0 (0 votes)
SSC Meetups Everywhere: Seattle, WA 2019-09-14T06:13:06.891Z · score: 0 (-1 votes)
SSC Meetups Everywhere: Seoul, South Korea 2019-09-14T06:08:26.697Z · score: 0 (0 votes)
SSC Meetups Everywhere: Sydney, Australia 2019-09-14T05:53:45.606Z · score: 0 (0 votes)
SSC Meetups Everywhere: Tampa, FL 2019-09-14T05:49:31.139Z · score: 0 (0 votes)
SSC Meetups Everywhere: Toronto, Canada 2019-09-14T05:45:15.696Z · score: 0 (-1 votes)
SSC Meetups Everywhere: Vancouver, Canada 2019-09-14T05:39:25.503Z · score: 0 (0 votes)
SSC Meetups Everywhere: Victoria, BC, Canada 2019-09-14T05:34:40.937Z · score: 0 (-1 votes)
SSC Meetups Everywhere: Vienna, Austria 2019-09-14T05:27:31.640Z · score: 2 (2 votes)
SSC Meetups Everywhere: Warsaw, Poland 2019-09-14T05:24:16.061Z · score: 0 (0 votes)
SSC Meetups Everywhere: Wellington, New Zealand 2019-09-14T05:17:28.055Z · score: 0 (0 votes)
SSC Meetups Everywhere: West Lafayette, IN 2019-09-14T05:11:28.211Z · score: 0 (0 votes)
SSC Meetups Everywhere: Zurich, Switzerland 2019-09-14T05:03:43.295Z · score: 0 (0 votes)
Rationality Exercises Prize of September 2019 ($1,000) 2019-09-11T00:19:51.488Z · score: 90 (25 votes)
Stories About Progress 2019-09-08T23:07:10.443Z · score: 32 (10 votes)
Political Violence and Distraction Theories 2019-09-06T20:21:23.801Z · score: 19 (8 votes)
Stories About Education 2019-09-04T19:53:47.637Z · score: 42 (17 votes)
Stories About Academia 2019-09-02T18:40:00.106Z · score: 33 (21 votes)
Peter Thiel/Eric Weinstein Transcript on Growth, Violence, and Stories 2019-08-31T02:44:16.833Z · score: 72 (30 votes)
AI Alignment Writing Day Roundup #1 2019-08-30T01:26:05.485Z · score: 34 (14 votes)
Why so much variance in human intelligence? 2019-08-22T22:36:55.499Z · score: 56 (21 votes)
Announcement: Writing Day Today (Thursday) 2019-08-22T04:48:38.086Z · score: 32 (12 votes)
"Can We Survive Technology" by von Neumann 2019-08-18T18:58:54.929Z · score: 35 (11 votes)
A Key Power of the President is to Coordinate the Execution of Existing Concrete Plans 2019-07-16T05:06:50.397Z · score: 117 (36 votes)
Bystander effect false? 2019-07-12T06:30:02.277Z · score: 19 (10 votes)
The Hacker Learns to Trust 2019-06-22T00:27:55.298Z · score: 81 (24 votes)
Welcome to LessWrong! 2019-06-14T19:42:26.128Z · score: 109 (63 votes)
Von Neumann’s critique of automata theory and logic in computer science 2019-05-26T04:14:24.509Z · score: 30 (11 votes)
Ed Boyden on the State of Science 2019-05-13T01:54:37.835Z · score: 64 (16 votes)

Comments

Comment by benito on Blog Post Day (Unofficial) · 2020-02-18T20:44:10.264Z · score: 3 (2 votes) · LW · GW

Sure thing, I'll write a blogpost that day.

Comment by benito on What Money Cannot Buy · 2020-02-03T23:52:58.235Z · score: 6 (3 votes) · LW · GW

This seems like a very important point to me; I'm glad it's been written down clearly and concretely. Curated.

Comment by benito on AI Alignment 2018-19 Review · 2020-02-02T02:06:46.976Z · score: 9 (4 votes) · LW · GW

Curated. This sort of review work is crucial for making common records of what progress has been made, so thank you for putting in the work to make it.

Comment by benito on REVISED: A drowning child is hard to find · 2020-02-01T08:12:13.175Z · score: 7 (3 votes) · LW · GW

I'm a little worried that by not being loud enough with the caveats, the EA movement's "discourse algorithm" (the collective generalization of "cognitive algorithm") might be accidentally running a distributed motte-and-bailey, where the bailey is "You are literally responsible for the death of another human being if you don't donate $5000" and the motte is "The $5000 estimate is plausible, and it's a really important message to get people thinking about ethics and how they want to contribute."

I initially wrote a comment engaging with this, since I thought it was one of the primary things Ben was trying to talk about in the post, but then Oli persuaded me that Ben was just arguing that the cost-effectiveness estimates were false / a lie, so I removed the comment. I'd appreciate an explicit comment on how much this is one of the primary things Ben is trying to say with the essay.

Comment by benito on REVISED: A drowning child is hard to find · 2020-02-01T05:36:13.535Z · score: 5 (3 votes) · LW · GW

He then argues that the world does not look like it actually has $50B of life-saving opportunities for $5k a piece lying around

This point seems like it's doing a lot of the work, and I'm honestly uncertain. I can imagine it going either way - for example, I can imagine the average life saved being very cheap when you're taking advantage of things at scale.

So it seems like a crux whether the Gates Foundation's cost-effectiveness is comparably low relative to $5k per life saved (GiveWell's suggested cost-effectiveness estimate). If Gates' cost per life seems much higher, then something is going wrong. Oli and I looked into two cases; here's some partial work:

  • The Gates Foundation spent $1.3 billion on malaria in 2012. For that to beat GiveWell's cost-effectiveness estimates, it would have to beat $5k per life saved, which would be 260k people. This is not obviously implausible, given that ~500k people died of malaria that year. It would overall mean they'd have to have reduced malaria deaths by around 30%, which seems massive but not implausible. (Rough arithmetic sketched below the list.)
  • Measles deaths have stayed roughly constant since 2010, at around 300-400k per year. It seems like Gates might have put hundreds of millions in, which means that for GiveWell's recommendations to beat Gates' cost-effectiveness, measles cases would have had to counterfactually double in that time period, or something like this, which seems somewhat unlikely.
    • However, I think that Gates was trying to 'kill off' measles, which has large returns in the long term, so it's not obvious they shouldn't spend a lot of money on high-variance bets to maximise coverage of measles vaccines.
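For concreteness, here's a rough back-of-envelope of the malaria bullet above (a minimal sketch in Python; the figures are the same rough estimates used in the bullet, not new data):

    # Back-of-envelope check of the malaria bullet above.
    gates_malaria_spend = 1.3e9      # Gates Foundation malaria spending in 2012 (USD)
    cost_per_life = 5_000            # GiveWell-style bar, USD per life saved
    malaria_deaths_2012 = 500_000    # approximate malaria deaths that year

    # Lives Gates would need to have saved to beat the $5k/life bar:
    lives_needed = gates_malaria_spend / cost_per_life                       # 260,000

    # If those deaths were averted, the counterfactual toll would have been
    # (actual deaths + lives saved), so the implied reduction in deaths is:
    implied_reduction = lives_needed / (malaria_deaths_2012 + lives_needed)  # ~0.34, i.e. "around 30%"

    print(f"{lives_needed:,.0f} lives, ~{implied_reduction:.0%} reduction")
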
Comment by benito on how has this forum changed your life? · 2020-01-31T07:37:41.981Z · score: 6 (3 votes) · LW · GW

Related: Brienne wrote a really interesting comment about this broader dynamic in journalism and popular writing, about what stories are available for a writer to tell.

Comment by benito on If brains are computers, what kind of computers are they? (Dennett transcript) · 2020-01-31T07:00:01.621Z · score: 2 (1 votes) · LW · GW

Cheers :-)

Comment by benito on how has this forum changed your life? · 2020-01-31T01:43:50.788Z · score: 42 (11 votes) · LW · GW

I have some difficulty distinguishing personal growth I've experienced due to the culture on LessWrong from growth due to other parts of society and culture and myself. But here are some things that feel substantially downstream of interacting with the ideas and culture in this small intellectual community.

(I imagine others will give very different answers.)

Help me focus more on what I care about, and less on what people and society expect of me.

  • I'm a better classical musician. These days I'm better able to do deliberate practice on the parts of the music I need to improve at. To give a dumb/oversimplified quantitative measure, I'm able to learn-and-memorise pieces of music maybe 5-10x more efficiently. When I was at music school as a teenager, there were pieces of music I liked that I didn't finish memorising for years, because when I was in the practice room I was 'going through the motions' of practice far more than 'actually trying to get better according to my own taste'. In the past weeks and months I've picked up a dozen or so pieces by Bach and others in maybe 5-10 hours of playing each, have memorised each, and am able to play them more musically and emotionally than before.
  • I did weird things during my undergrad at Oxford that were better for my career than being 'a good student'. The university wanted me to care about things like academic prestige and grades in all of my classes, but I realised that I wasn't very interested in the goals they had for me. The academic setting rarely encouraged genuine curiosity about math and science, and felt fairly suffocating. I focused on finding interesting people and working on side-projects I was excited about, and ended up doing things I think in retrospect were far more valuable for my career than getting good grades.

Help me think about modern technology clearly and practically.

  • Quit social media. Writings and actions by people in the LessWrong intellectual community have helped me think about how to interact with social media more than any other public dialogue on the subject. Zvi (author of many LessWrong sequences) did some very simple experiments on the Facebook newsfeed, and wrote about his experiences with it, in a way that helped me think of Facebook as an actively adversarial force, optimised to get me hooked, and fundamentally not something we can build a healthy community on. I found his two simple experiments more informative than anything I've seen come out of academia on the subject. The fact that he quit Facebook cold-turkey without exception, as did a few more friends, has caused me to move off it too. I now view all social media on Saturdays in a 2-hour period, and don't write/react on any of it, and think this has been very healthy.
  • Using Google Docs for meetings. Generally this community has helped me think better using modern technology. One user wrote this post about social modelling, which advised using Google Docs to have meetings. At work I now regularly have the primary meeting conversation inside a Google Doc, where 3-5 people in a meeting can have many parallel conversations at once. I've personally found this really valuable, both in allowing us to use the time more effectively (rather than one person talking at a time, 5 of us can be writing in different parts of the document at the same time), and in producing a record of our thought processes, reasoning and decisions, for us to share with others and reflect on months and years down the line.

Help me figure out who I am and build my life.

  • Take bets. I take bets on things regularly. That's a virtue and something respected in this intellectual community. I get to find out I'm wrong / prove other people wrong, and generally move conversations forward.
  • Avoid politics. Overall I think that I've successfully avoided getting involved in politics or building a political identity throughout my teenage and university years, and focused on learning what real knowledge is in areas of science and other practical matters, which I think has been healthy. I have a sense that this means that when I eventually have to build up my understanding of more political domains, I'll be able to keep a better sense of what is true and what is convenient narrative. This is partly due to other factors (e.g. personal taste), but has been aided by the LessWrongian dislike of politics.
  • Learn to trust better. Something about the intellectual honesty and rigour of the LessWrong intellectual community has helped me learn to 'kill your darlings', that just because I respect someone doesn't mean they're infallible. The correct answer isn't to always trust someone, or to never trust someone, but to build up an understanding of when they are trustworthy and when they aren't. (This post says that idea fairly clearly in a way I found helpful.)
  • Lots of other practices. I learned to think carefully from reading some of the fiction and stories written by LessWrongers. A common example is the Harry Potter fanfiction "Harry Potter and the Methods of Rationality", which communicates a lot of the experience of someone who lives up to the many virtues we care about on LessWrong. There are lots of experiences I could write about, about empirically testing your beliefs (including your political beliefs), being curious about how the world works, taking responsibility, and thinking for yourself. I have more I could say here, but it would take me a while to say it while avoiding spoilers. Nonetheless, it's had a substantial effect on how I live and work and collaborate with other people.
    • Other people write great things too; I won't try to find all of them. This recent one by Scott Alexander is one I think about a fair amount.

I guess there are a ton of things; the above are just a couple of examples that occurred to me in the ~30 mins I spent writing this answer.

By the way, while we care about understanding the human mind in a very concrete way on LessWrong, we are more often focused on an academic pursuit of knowledge. We recently did a community vote on the best posts from 2018. If you look at the top 10-20 or so posts, as well as a bunch of niche posts about machine learning and AI, you'll see the sort of discussion we do best on LessWrong. I don't come here to get 'life-improvements' or 'self-help'; I come here much more to be part of a small intellectual community that's very curious about human rationality.

Comment by benito on If brains are computers, what kind of computers are they? (Dennett transcript) · 2020-01-30T05:11:29.279Z · score: 2 (1 votes) · LW · GW

Johnswentworth, I'm interested in your reaction / thoughts on the above. Feels related to a lot of things you've been talking about.

Comment by benito on 2018 Review: Voting Results! · 2020-01-25T01:28:22.055Z · score: 6 (3 votes) · LW · GW

The whole point is that it does give you more points, I think.

Comment by benito on 2018 Review: Voting Results! · 2020-01-25T01:05:45.278Z · score: 4 (2 votes) · LW · GW

I can imagine, similar to how we have a button for 're-order the posts', we could have a button for 'normalise my votes'.

Comment by benito on 2018 Review: Voting Results! · 2020-01-24T17:48:34.268Z · score: 7 (4 votes) · LW · GW

It was 1st June 2018 that we built strong/weak upvotes - before then you always had to vote at your max strength. I could imagine that being responsible for the apparent info-cascades in very popular posts.

Comment by benito on 2018 Review: Voting Results! · 2020-01-24T09:20:01.107Z · score: 5 (3 votes) · LW · GW

Yeah! I also noticed this when looking over the results; there was a paragraph on it in the OP that I cut.

Comment by benito on 2018 Review: Voting Results! · 2020-01-24T03:56:32.504Z · score: 4 (2 votes) · LW · GW

You're quite right, fixed :)

Comment by benito on Reason and Intuition in science · 2020-01-23T23:09:42.137Z · score: 2 (1 votes) · LW · GW

In this post the author gives someone's real name and claims that they're the author of the quoted paragraph. We got an intercom message from a user claiming to be that person, asking us to remove the post given that (a) the post provides no evidence of the association, (b) they say this association is harmful to them, and (c) it now shows up as the fifth result on Google when searching for their name.

Doxxing attempts, whether true or false, are pretty bad, and I do think that LW's SEO is giving this claim more Google prominence even though the post provides no evidence for it. I think in this case I will edit any mentions of the person's name here to be the rot13'd version of the name. You can recover the name by entering it into the website rot13.com, but it will not be highly searchable on Google.
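(For reference, rot13 just shifts each letter 13 places around the alphabet, so applying it twice returns the original text. A minimal sketch in Python, if you'd rather not use the website - the example string here is made up, not the actual name:)

    import codecs

    # rot13 is self-inverse: the same call both encodes and decodes.
    encoded_name = "Wnar Qbr"  # made-up example, not the actual name
    print(codecs.encode(encoded_name, "rot_13"))  # -> Jane Doe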

Comment by benito on Coherent behaviour in the real world is an incoherent concept · 2020-01-22T08:33:16.170Z · score: 4 (2 votes) · LW · GW

Just a note that in the link that Wei Dai provides for "Relevant powerful agents will be highly optimized", Eliezer explicitly assigns '75%' to 'The probability that an agent that is cognitively powerful enough to be relevant to existential outcomes, will have been subject to strong, general optimization pressures.'

even if he doesn't it seems like a common implicit belief in the rationalist AI safety crowd and should be debunked anyway.

Agreed.

Comment by benito on Go F*** Someone · 2020-01-20T22:06:02.504Z · score: 12 (6 votes) · LW · GW

Not offering a general opinion here right now, but I want to briefly respond to the particular phrasing of:

"Given that there is a wide variety of readers, are we sufficiently sure that this will not needlessly offend or upset some of them?"

As stated, this is far too costly a standard. This is the internet, where an incredible number of people can see your content, all with very idiosyncratic feelings and life stories, and the amount of work required to ensure zero readers will feel offended or upset is overwhelming and silencing.

Comment by benito on Voting Phase of 2018 LW Review · 2020-01-19T00:00:31.352Z · score: 2 (1 votes) · LW · GW

Woop! I did the same yesterday.

Comment by benito on Reality-Revealing and Reality-Masking Puzzles · 2020-01-18T13:08:29.650Z · score: 27 (12 votes) · LW · GW

I think that losing your faith in civilizational adequacy does feel more like a deconversion experience. All your safety nets are falling, and I cannot promise you that we'll replace them all. The power that 'made things okay' is gone from the world.

Comment by benito on Reality-Revealing and Reality-Masking Puzzles · 2020-01-18T06:38:36.746Z · score: 11 (3 votes) · LW · GW

Hm, what caused them? I'm not sure exactly, but I will riff on it for a bit anyway.

Why was I uninterested in hanging out with most people? There was something I cared about quite deeply, and it felt feasible that I could get it, but it seemed transparent that these people couldn't recognise it or help me get it and I was just humouring them to pretend otherwise. I felt kinda lost at sea, and so trying to understand and really integrate others' worldviews when my own felt unstable was... it felt like failure. Nowadays I feel stable in my ability to think and figure out what I believe about the world, and so I'm able to use other people as valuable hypothesis generation, and play with ideas together safely. I feel comfortable adding ideas to my wheelhouse that aren't perfectly vetted, because I trust overall I'm heading in a good direction and will be able to recognise their issues later.

I think that giving friends a life-presentation and then later noticing a clear hole in it felt really good, it felt like thinking for myself, putting in work, and getting out some real self-knowledge about my own cognitive processes. I think that gave me more confidence to interact with others' ideas and yet trust I'd stay on the right track. I think writing my ideas down into blogposts also helped a lot with this.

Generally, building up an understanding of the world that seemed to actually be right, that worked for making stuff, and that people I respected trusted, helped a lot.

That's what I got right now.

Oh, there was another key thing tied up with the above: feeling like I was in control of my future. I was terrible at being a 'good student', yet I thought that my career depended on doing well at university. This led to a lot of motivated reasoning and a perpetual fear that made it hard to explore, and gave me a lot of tunnel vision throughout my life at the time. Only when I realised I could get work that didn't rely on good grades at university, but instead on trust I had built in the rationality and EA networks, and that I could do things I cared about like work on LessWrong, did I feel more relaxed about exploring other big changes I wanted in how I lived my life, and doing things I enjoyed.

A lot of these worries felt like I was waiting to fix a problem - a problem whose solution I could reach, at least in principle - and then the worry would go away. This is why I said 'transitional'. I felt like the problems could be overcome.

Comment by benito on Bay Solstice 2019 Retrospective · 2020-01-18T02:48:14.961Z · score: 7 (3 votes) · LW · GW

You know,

About 2-3 months earlier, I chatted with Eliezer at a party. Afterward, on the drive home, I said to my friend:

"Gosh, Eliezer looks awfully spindly. It looks like he's lost weight, but it's all gone from his face and his arms."

I was starting to make all these updates about how it doesn't look good to lose weight when you're hitting 40, and that it's important to lose weight early, and so on.

I told this to Eliezer later. He said I got points for noticing my confusion, which I was pleased about.

Comment by benito on Moloch Hasn’t Won · 2020-01-18T02:40:26.511Z · score: 4 (2 votes) · LW · GW

I'm reading Jameson as just saying that, from an editing standpoint, the wording was sufficiently confusing that he had to stop for a few seconds to figure out that this wasn't what Zvi was saying. Like, he didn't believe Zvi believed it, but it nonetheless read like that for a minute.

(Either way, I don't care about it very much.)

Comment by benito on Bay Solstice 2019 Retrospective · 2020-01-18T01:53:00.554Z · score: 2 (1 votes) · LW · GW

I'll be the first to admit that Singularity is a better song than Five Thousand Years.

I agree it's more fun musically speaking, but the line about entropy in Five Thousand Years gets me every time.

Comment by benito on Realism about rationality · 2020-01-17T20:23:30.814Z · score: 2 (1 votes) · LW · GW

This is such an interesting use of spoiler tags. I might try it myself sometime.

Comment by benito on Reality-Revealing and Reality-Masking Puzzles · 2020-01-17T20:19:01.341Z · score: 5 (3 votes) · LW · GW

(These last two comments were very helpful for me, thanks.)

Comment by benito on Reality-Revealing and Reality-Masking Puzzles · 2020-01-17T00:25:54.262Z · score: 25 (9 votes) · LW · GW

That's close.

Engaging with CFAR's and LW's ideas about redesigning my mind and focusing on important goals for humanity (e.g. x-risk reduction) has primarily - not partially - majorly improved my general competence, and how meaningful my life is. I'm a much better person, more honest and true, because of it. It directly made my life better, not just my abstract beliefs about the future.

The difficulties above were transitional problems, not the main effects.

Comment by benito on A voting theory primer for rationalists · 2020-01-16T23:29:32.990Z · score: 2 (1 votes) · LW · GW

+1

Comment by benito on Reality-Revealing and Reality-Masking Puzzles · 2020-01-16T23:07:24.686Z · score: 16 (5 votes) · LW · GW

I see. I guess that framing feels slightly off to me - maybe this is what you meant or maybe we have a disagreement - but I would say "Helping people not have worse lives after interacting with <a weird but true idea>". 

Like, I think that similar disorienting things would happen if someone really tried to incorporate PG's "Black Swan Farming" into their action space, and indeed many good startup founders have weird lives with weird tradeoffs relative to normal people, which often leads to burnout. "Interacting with x-risk" or "Interacting with the heavy-tailed nature of reality" or "Interacting with AGI" or whatever. Oftentimes this is stuff humans have only been interacting with in the last 300 years, or in some cases 50 years.

Comment by benito on Reality-Revealing and Reality-Masking Puzzles · 2020-01-16T22:53:29.512Z · score: 25 (8 votes) · LW · GW

I find the structure of this post very clear, but I'm confused about which are the 'reality-masking' problems that you say you spent a while puzzling over. You list three bullets in that section; let me rephrase them as problems.

  • How to not throw things out just because they seem absurd
  • How to update on bayesian evidence even if it isn't 'legible, socially approved evidence'
  • How to cause beliefs to propagate through one's model of the world

I guess this generally connects with my confusion around the ontology of the post. I think it would make sense for the post to be 'here are some problems where puzzling at them helped me understand reality' and 'here are some problems where puzzling at them caused me to hide parts of reality from myself', but you seem to think it's an attribute of the puzzle, not the way one approaches it, and I don't have a compelling sense of why you think that.

You give an example of teaching people math, and finding that you were training particular bad patterns of thought in yourself (and the students). That's valid, and I expect it's a widespread experience. I personally have done some math tutoring that I don't think had that property, due to background factors that affected how I approached it. In particular, I wasn't getting paid, my mum told me I had to do it (she's a private English teacher who also offers maths, but knows I grok maths better than her), and so I didn't have much incentive to achieve results. I mostly just spoke with kids about what they understood, drew diagrams, etc., and had a fun time. I wasn't too results-driven, mostly just having fun, and this effect didn't occur.

More generally, many problems will teach you bad things if you locally hill-climb or optimise in a very short-sighted way. I remember as a 14-year-old, I read Thinking Physics, spent about 5 mins per question, and learned nothing from repeatedly just reading the answers. Nowadays I do Thinking Physics problems weekly, and I spend like 2-3 hours per question. This seems more like a fact about how I approached it than a fact about the thing itself.

Looking up at the three bullets I pointed to, all three of them are important things to get right, that most people could be doing better on. I can imagine healthy and unhealthy ways of approaching them, but I'm not sure what an 'unhealthy puzzle' looks like.

Comment by benito on Bay Solstice 2019 Retrospective · 2020-01-16T20:12:44.262Z · score: 19 (8 votes) · LW · GW

Thank you, mingyuan, Nat and Chelsea, for organising the Solstice. It's one of the most meaningful events I go to each year, and it makes me feel like I care about the same things as so many other people I know.

As a second point, this retrospective is really detailed and I feel like I can get a lot of your knowledge from it, and I'm really glad something like this will be around for future solstice organisers to learn from.

Comment by benito on The Rocket Alignment Problem · 2020-01-16T19:59:57.122Z · score: 10 (2 votes) · LW · GW

Fair, but I expect I've also read those comments buried in random threads. Like, Nate said it here three years ago on the EA Forum.

The main case for [the problems we tackle in MIRI's agent foundations research] is that we expect them to help in a gestalt way with many different known failure modes (and, plausibly, unknown ones). E.g., 'developing a basic understanding of counterfactual reasoning improves our ability to understand the first AGI systems in a general way, and if we understand AGI better it's likelier we can build systems to address deception, edge instantiation, goal instability, and a number of other problems'.

I have a mental model of directly working on problems. But before Eliezer's post, I didn't have an alternative mental model to move probability mass toward. I just funnelled probability mass away from "MIRI is working on direct problems they foresee in AI systems" to "I don't understand why MIRI is doing what it's doing". Nowadays I have a clearer pointer to what technical research looks like when you're trying to get less confused and get better concepts.

This sounds weirdly dumb to say in retrospect, because 'get less confused and get better concepts' is one of the primary ways I think about trying to understand the world these days. I guess the general concepts have permeated a lot of LW/rationality discussion. But at the time I had a concept-shaped hole in my discussion of AI alignment research, and after reading this post I had a much clearer sense of that concept.

Comment by benito on Reality-Revealing and Reality-Masking Puzzles · 2020-01-16T19:51:42.853Z · score: 54 (18 votes) · LW · GW

I experienced a bunch of those disorientation patterns during my university years. For example:

  • I would only spend time with people who cared about x-risk as well, because other people seemed unimportant and dull, and I thought I wouldn't want to be close to them in the long run. I would choose to spend time with people even if I didn't connect with them very much, hoping that opportunities to do useful things would show up (most of the time they didn't). And yet I wasn't able to hang out with these people. I went through maybe a 6-month period where, when I met up with someone, the first thing I'd do was list out like 10-15 topics we could discuss, and try to figure out which were the most useful to talk about and in what order we should talk. I definitely also turned many of these people off hanging out with me because it was so taxing. I was confused about this at the time. I thought I was not doing it well enough or something, because I wasn't providing enough value to them such that they were clearly having a good time.
  • I became very uninterested in talking with people whose words didn't cash out into a gears-level model of the situation based in things I could confirm or understand. I went through a long period of not being able to talk to my mum about politics at all. She's very opinionated and has a lot of tribal feels and affiliations, and seemed to me to not be thinking about it in the way I wanted to think about it, which was a more first-principles fashion. Nowadays I find it interesting to engage with how she sees the world, argue with it, feel what she feels. It's not the "truth" that I wanted, and I can't take in the explicit content of her words and just input them into my beliefs, but this isn't the only way to learn from her. She has a valuable perspective on human coordination, tied up with important parts of her character and life story, that a lot of people share.
  • Relatedly, I went through a period of not being able to engage with aphorisms or short phrases that sounded insightful. Now I feel more trusting of my taste in what things mean and which things to take with me.
  • I generally wasn't able to connect with my family about what I cared about in life / in the big picture. I'd always try to be open and honest, and so I'd say something like "I think the world might end and I should do something about it" and they'd think that sounded mad and just ignore it. My Dad would talk about how he just cares that I'm happy. Nowadays I realise we have a lot of shared reference points for people who do things, not because they make you happy or because they help you be socially secure, but because they're right, because they're meaningful and fulfilling, and because it feels like it's your purpose. And they get that, and they know they make decisions like that, and they understand me when I talk about my decisions through that frame.
  • I remember on my 20th birthday, I had 10 of my friends round and gave a half-hour PowerPoint presentation on my life plan. Their feedback wasn't that useful, but I realised like a week later that the talk only contained info about how to evaluate whether a plan was good, and not how to generate plans to be evaluated. I'd just picked the one thing that people talked about that sounded okay under my evaluation process (publishing papers in ML, which was a terrible choice for me; I interacted very badly with academia). It took me a week to notice that I'd not said how to come up with plans. I then realised that I'd been thinking in a very narrow and evaluative way, and not been open to exploring interesting ideas before I could evaluate whether they worked.

I should say, these shifts have not been anything like an unmitigated failure, and I don't now believe they were worth it just because they caused me to be more socially connected to x-risk things or because they were worth it in some Pascal's mugging kind of way. Like, riffing off that last example, the birthday party was followed by us doing a bunch of other things I really liked - my friends and I read a bunch of dialogues from GEB after that (the voices people did were very funny) and ate cake, and I remember it fondly. The whole event was slightly outside my comfort zone, but everyone had a great time, and it was also in the general pattern of me trying to more explicitly optimise for what I cared about. A bunch of the stuff above has led me to form the strongest friendships I've had, much stronger than I think I expected I could have. And many other things I won't detail here.

Overall the effects on me personally, on my general fulfilment and happiness and connection to people I care about, have been strongly positive, and I'm glad about this. I take more small social risks, and they pay off bigger. I'm better at getting what I want, getting sh*t done, etc. Here, I'm mostly just listing some of the awkward things I did while at university.
 

Comment by benito on The Rocket Alignment Problem · 2020-01-16T17:32:29.644Z · score: 10 (2 votes) · LW · GW

Huh, I’m surprised to hear you say you already knew it. I did not know this already. This is the post where I properly understood that Eliezer et al are interested in decision theory and tiling agents and so on, not because they’re direct failures that they expect of future systems, but because they highlight confusions that are in want of basic theory to describe them, and that this basic theory will hopefully help make AGI alignable. Like I think I’d heard the words once or twice before then, but I didn’t really get it.

(It's important that Embedded Agency came out too, which was entirely framed around this “list of confusions in want of better concepts / basic theory”, so I had some more concrete things to pin this to.)

Comment by benito on CFAR Participant Handbook now available to all · 2020-01-14T20:41:00.563Z · score: 2 (1 votes) · LW · GW

(Reminder that you can subscribe to a post to get notified of comments on that post.)

Comment by benito on Realism about rationality · 2020-01-13T00:45:56.897Z · score: 12 (3 votes) · LW · GW

Huh? A lot of these points about evolution register to me as straightforwardly false. Understanding the theory of evolution moved us from "Why are there all these weird living things? Why do they exist? What is going on?" to "Each part of these organisms has been designed by a local hill-climbing process to maximise reproduction." If I looked into it, I expect I'd find out that early medicine found it very helpful to understand how the system was built. This is like the difference between me handing you a massive amount of code that has a bunch of weird outputs and telling you to make it work better and more efficiently, versus the same thing but where I tell you what company made the code, why they made it, and how they made it, and give you loads of examples of other pieces of code they made in this fashion.

If I knew how to operationalise it I would take a pretty strong bet that the theory of natural selection has been revolutionary in the history of medicine.

Comment by benito on The LessWrong 2018 Review · 2020-01-11T01:15:51.831Z · score: 4 (2 votes) · LW · GW

Hey, actually no, we're currently reviewing 2018's posts. We've waited a year in order to give everyone the power of hindsight to figure out what was actually good.

Btw, I think you might be the same user as user taryneast. That account is eligible to vote. I suggest logging in with that account, or contacting us via intercom (bottom right corner of the screen) if you'd like to reset your password.

Comment by benito on Voting Phase of 2018 LW Review · 2020-01-09T22:52:41.373Z · score: 6 (3 votes) · LW · GW

+1 I have voted on a number of posts that I've mostly skimmed, but not voted with much weight. 

(Quadratic voting makes the first few votes just very cheap, which was one part of my reasoning.)
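(To illustrate: under a standard quadratic-cost rule, casting n votes on a single post costs n^2 points in total, so the marginal cost of the nth vote is 2n - 1 and the first couple of votes are nearly free. A minimal sketch - the exact cost table the Review used may differ:)

    # Standard quadratic voting: n votes on one item cost n**2 points in total,
    # so the marginal cost of the nth vote is 2n - 1 and early votes are cheap.
    def total_cost(votes: int) -> int:
        return votes ** 2

    for n in range(1, 5):
        marginal = total_cost(n) - total_cost(n - 1)
        print(f"vote {n}: marginal cost {marginal}, running total {total_cost(n)}")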

Comment by benito on In Favor of Niceness, Community, and Civilization · 2020-01-09T19:03:26.348Z · score: 2 (1 votes) · LW · GW

Edit: Oops, misread the quote, ignore me and read cousin_it's comment.

Comment by benito on CFAR Participant Handbook now available to all · 2020-01-08T19:29:53.669Z · score: 8 (4 votes) · LW · GW

To be clear, I don't think you mean "This explains about 2/3rds of what CFAR learned about rationality". I think you mean "This is an artifact that records about 2/3rds of the concrete, teachable techniques that CFAR's understanding of rationality has output." (I think I’m right, but happy to be corrected if I’m wrong.)

Comment by benito on Voting Phase of 2018 LW Review · 2020-01-08T17:03:54.586Z · score: 9 (4 votes) · LW · GW

You don't have to vote on all posts, and you don't have to have read all the posts. I think it's fine for the vote to correlate with which posts people actually read. The default vote on every post is ‘neutral’, which equals zero quadratic votes. I voted on about half of the posts, and I think I read way more than most people.

Edit: Rewritten.

Comment by benito on AIRCS Workshop: How I failed to be recruited at MIRI. · 2020-01-08T06:44:21.981Z · score: 5 (3 votes) · LW · GW

This rule seems similar to the rule "do not show pictures of slaughterhouse to people who didn't decide by themselves to check how slaughterhouse are". On the one hand, it can be argued that if people knew how badly animals were treated, things would get better for them. It remains that, even if you believe that, showing slaughterhouse's picture to random people who were not prepared would be an extremely mean thing to do to them.
 

Huh. That’s a surprisingly interesting analogy. I will think more on it. Thx.

Comment by benito on Circling as Cousin to Rationality · 2020-01-07T20:42:14.907Z · score: 2 (1 votes) · LW · GW

The primary mechanism is 'understanding' boundaries instead of 'lowering' them, tho; like, often you end up in situations where you look at your boundaries and go "yep, that's definitely helpful and where it should be" or you notice the way that you had been forcing yourself to behave a particular way and that was self-harming because you were ignoring one of your own boundaries.

Yeah, this description matches things I like about circling. I've had experiences with people who in normal life would want things of me that I don't want to give them (e.g. types of social efforts and reassurances), and circling has given me space to practise not giving it to them when I endorse that, and introspecting in slow motion, understanding better and in more detail what both I and they are feeling (and I believe they're learning things about themselves too).

Comment by benito on AIRCS Workshop: How I failed to be recruited at MIRI. · 2020-01-07T02:36:36.156Z · score: 4 (2 votes) · LW · GW

Cool, all seems good (and happy to be corrected) :-)

Comment by benito on AIRCS Workshop: How I failed to be recruited at MIRI. · 2020-01-07T02:12:11.718Z · score: 23 (10 votes) · LW · GW

This was all really interesting, thanks for writing it and for being so open with your thinking, I think it's really valuable. Lots of this hiring process sounds very healthy - for instance, I'm glad to hear they pay you well for the hours you spend doing work trial projects.

As far as I understand it, plenty of people are panicked when they really understand what AI risks are. So Anna Salamon gave us a rule: We don't speak of AI safety to people who do not express the desire to hear about it. When I asked for more informations, she specified that it is okay to mention the words "AI Safety"; but not to give any details until the other person is sure they want to hear about it. In practice, this means it is okay to share a book/post on AI safety, but we should warn the person to read it only if they feel ready. Which leads to a related problem: some people never experienced an existential crisis or anxiety attack of their life, so it's all too possible they can't really "be ready"...

As you can see, the staff really cared about us and wanted to be sure that we would be able to manage the thoughts related to AI risk.

Yeah, Anna talked in a bunch of detail about her thinking on this in this comment on the recent CFAR AMA, in case you're interested in seeing more examples of ways people can get confused when thinking about AI risk.

Every time a company decides not to hire me, I would love to know why, at least as to avoid making the same mistakes again. Miri here is an exception. I can see only so many reasons not to hire me that the outcome was unsurprising. The process and they considering me in the first place was.

My take here is a bit different from yours. I think it's best to collaborate with interesting people who have unique ways of thinking, and it's more important to focus on that than "I will only hire people who agree with me". When I'm thinking about new people I've met and whether to hang out with them more / work with them more, I rarely am thinking about whether or not they also often use slogans like "x-risk" and "AI safety", but primarily how they think and whether I'd get value out of working with them / hanging out with them. 

The process you describe at CFAR sounds like a way to focus on finding interesting people: withholding judgement for as long as possible about whether you should work together, while giving you and them lots of space and time to build a connection. This lets you talk through the ideas that are interesting to both of you, and generally understand each other better than a 2-hour interview or coding project offers. 

Ed Kmett seems like a central example here; my (perhaps mistaken) belief is that he's primarily doing a lot of non-MIRI work he finds valuable, and is inspired by that more than the other kinds of research MIRI is doing, but he and other researchers at MIRI find ways to collaborate, and when they do he does really cool stuff. I expect there was a period where he engaged with the ideas around AI alignment in a lot of detail, and has opinions about them, and of course at some point that was important to whether he wanted to work at MIRI, but I expect Nate and others would be very excited about him being around, regardless of whether he thought their project was terribly important, given his broader expertise and ways of thinking about functional programming. I think it's great to spend time with interesting people finding out more about their interests and how they think, and that this stuff is more valuable than taking a group of people where the main thing you know is that they passed the coding part of the interview, and primarily spending time persuading them that your research is important.

Even given that, I'm sorry you had a stressful/awkward time trying to pretend this was casual and not directly important for you for financial and employment reasons. It's a dynamic I've experienced not infrequently within Bay Area rationalist gatherings (and EA gatherings globally), of being at large social events and trying to ignore how much they can affect e.g. your hiring prospects (I don't think MIRI is different in this regard from spaces involving other orgs like 80k, OpenPhil, etc). I'll add that, as above, I do indeed think that MIRI staff had not themselves made a judgment at the time of inviting you to the workshop. Also, I'll mention that my sense is it was affecting you somewhat more than it does the median person at such events, who I think mostly have an overall strongly positive experience and an okay time ignoring this part of it. Still, I think it's a widespread effect, and I'm honestly not sure what to do about it.

Comment by benito on 2019 AI Alignment Literature Review and Charity Comparison · 2020-01-06T21:46:47.584Z · score: 2 (1 votes) · LW · GW

I also thought so. I wondered if maybe Larks is describing that MacAskill incorporated Demski's comments-on-a-draft into the post.

Comment by benito on Benito's Shortform Feed · 2020-01-01T16:54:13.490Z · score: 17 (6 votes) · LW · GW

There's a game for the Oculus Quest (that you can also buy on Steam) called "Keep Talking And Nobody Explodes".

It's a two-player game. When playing with the VR headset, one of you wears the headset and has to defuse bombs in a limited amount of time (either 3, 4 or 5 mins), while the other person sits outside the headset with the bomb-defusal manual and tells you what to do. Whereas with other collaboration games you're all looking at the screen together, with this game the substrate of communication is solely conversation; the other person is providing all of your inputs about how their half is going (i.e. it's not shown on a screen).

The types of puzzles are fairly straightforward computational problems but with lots of fiddly instructions, and require the outer person to figure out what information they need from the inner person. It often involves things like counting numbers of wires of a certain colour, or remembering the previous digits that were being shown, or quickly describing symbols that are not any known letter or shape.

So the game trains you and a partner in efficiently building a shared language for dealing with new problems.

More than that, as the game gets harder, often some of the puzzles require substantial independent computation from the player on the outside. At this point, it can make sense to play with more than two people, and start practising methods for assigning computational work between the outer people (e.g. one of them works on defusing the first part of the bomb, and while they're computing in their head for ~40 seconds, the other works on defusing the second part of the bomb in dialogue with the person on the inside). This further creates a system which trains the ability to efficiently coordinate on informational work under time pressure.

Overall I think it's a pretty great game for learning and practising a number of high-pressure communication skills with people you're close to.

 

Comment by benito on Perfect Competition · 2019-12-30T12:22:38.405Z · score: 2 (1 votes) · LW · GW

(Edited. Thx for letting us know.)

Comment by benito on Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think · 2019-12-28T06:10:43.439Z · score: 10 (2 votes) · LW · GW

This is personally quite helpful, thanks for posting it.

Comment by benito on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2019-12-27T12:38:34.150Z · score: 2 (1 votes) · LW · GW

I don’t really stand by the last half of the points above, i.e. the last ~third of the longer review. I think there’s something important to say here about the relationship between common knowledge and deontology, but I didn’t really say it and said something else instead. I hope to get the time to try again to say it.

Comment by benito on How’s that Epistemic Spot Check Project Coming? · 2019-12-26T21:32:28.954Z · score: 12 (6 votes) · LW · GW

"No Gods, No Proxies, Just Digging For Truth" is a good tagline for your blog.