Rationalist Town Hall: Pandemic Edition 2020-10-21T23:54:03.528Z · score: 47 (11 votes)
Sunday October 25, 12:00PM (PT) — Scott Garrabrant on "Cartesian Frames" 2020-10-21T03:27:12.739Z · score: 39 (10 votes)
Sunday October 18, 12:00PM (PT) — Garden Party 2020-10-17T19:36:52.829Z · score: 34 (11 votes)
Have the lockdowns been worth it? 2020-10-12T23:35:14.835Z · score: 69 (28 votes)
Fermi Challenge: Trains and Air Cargo 2020-10-05T21:51:45.281Z · score: 28 (7 votes)
Postmortem to Petrov Day, 2020 2020-10-03T21:30:56.491Z · score: 68 (42 votes)
Open & Welcome Thread – October 2020 2020-10-01T19:06:45.928Z · score: 14 (6 votes)
What are good rationality exercises? 2020-09-27T21:25:24.574Z · score: 49 (12 votes)
Honoring Petrov Day on LessWrong, in 2020 2020-09-26T08:01:36.838Z · score: 119 (44 votes)
Sunday August 23rd, 12pm (PDT) – Double Crux with Buck Shlegeris and Oliver Habryka on Slow vs. Fast AI Takeoff 2020-08-22T06:37:07.173Z · score: 34 (8 votes)
Forecasting Thread: AI Timelines 2020-08-22T02:33:09.431Z · score: 112 (49 votes)
[Oops, there is actually an event] Notice: No LW event this weekend 2020-08-22T01:26:31.820Z · score: 11 (2 votes)
Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) 2020-08-20T00:49:49.639Z · score: 49 (14 votes)
Survey Results: 10 Fun Questions for LWers 2020-08-19T06:10:55.386Z · score: 42 (16 votes)
10 Fun Questions for LessWrongers 2020-08-18T03:28:05.276Z · score: 47 (16 votes)
Sunday August 16, 12pm (PDT) — talks by Ozzie Gooen, habryka, Ben Pace 2020-08-14T18:32:35.378Z · score: 28 (5 votes)
Is Wirecutter still good? 2020-08-07T21:54:06.141Z · score: 46 (19 votes)
Sunday August 9, 1pm (PDT) — talks by elityre, jacobjacob, Ruby 2020-08-06T22:50:21.550Z · score: 32 (6 votes)
Sunday August 2, 12pm (PDT) — talks by jimrandomh, johnswentworth, Daniel Filan, Jacobian 2020-07-30T23:55:44.712Z · score: 17 (3 votes)
What Failure Looks Like: Distilling the Discussion 2020-07-29T21:49:17.255Z · score: 67 (16 votes)
"Should Blackmail Be Legal" Hanson/Zvi Debate (Sun July 26th, 3pm PDT) 2020-07-20T04:06:26.275Z · score: 37 (11 votes)
Sunday July 19, 1pm (PDT) — talks by Raemon, ricraz, mr-hire, Jameson Quinn 2020-07-16T20:04:37.974Z · score: 26 (5 votes)
Sunday July 12 — talks by Scott Garrabrant, Alexflint, alexei, Stuart_Armstrong 2020-07-08T00:27:57.876Z · score: 19 (4 votes)
The silence is deafening – Devon Zuegel 2020-07-04T02:30:59.409Z · score: 28 (9 votes)
Inviting Curated Authors to Give 5-Min Online Talks 2020-07-01T01:05:39.794Z · score: 27 (6 votes)
Radical Probabilism [Transcript] 2020-06-26T22:14:13.523Z · score: 47 (14 votes)
Sunday June 28 – talks by johnswentworth, Daniel kokotajlo, Charlie Steiner, TurnTrout 2020-06-26T19:13:23.754Z · score: 26 (5 votes)
A Petition 2020-06-25T05:44:50.050Z · score: 118 (41 votes)
Prediction = Compression [Transcript] 2020-06-22T23:54:22.170Z · score: 60 (17 votes)
Online Curated LessWrong Talks 2020-06-19T02:16:14.824Z · score: 16 (2 votes)
Sunday June 21st – talks by Abram Demski, alkjash, orthonormal, eukaryote, Vaniver 2020-06-18T20:10:38.978Z · score: 50 (13 votes)
Superexponential Historic Growth, by David Roodman 2020-06-15T21:49:00.188Z · score: 43 (14 votes)
The one where Quirrell is an egg 2020-04-15T06:02:36.337Z · score: 17 (7 votes)
Coronavirus: Justified Key Insights Thread 2020-04-13T22:40:03.104Z · score: 54 (15 votes)
Hanson & Mowshowitz Debate: COVID-19 Variolation 2020-04-08T00:07:28.315Z · score: 39 (11 votes)
April Fools: Announcing LessWrong 3.0 – Now in VR! 2020-04-01T08:00:15.199Z · score: 93 (33 votes)
Small Comment on Organisational Disclaimers 2020-03-29T17:07:48.339Z · score: 30 (14 votes)
[Update: New URL] Today's Online Meetup: We're Using Mozilla Hubs 2020-03-29T04:00:18.228Z · score: 41 (9 votes)
March 25: Daily Coronavirus Updates 2020-03-27T04:32:18.530Z · score: 11 (2 votes)
Hanson vs Mowshowitz LiveStream Debate: "Should we expose the youth to coronavirus?" (Mar 29th) 2020-03-26T23:46:08.932Z · score: 43 (12 votes)
March 24th: Daily Coronavirus Link Updates 2020-03-26T02:22:35.214Z · score: 9 (1 votes)
Announcement: LessWrong Coronavirus Links Database 2.0 2020-03-24T22:07:29.162Z · score: 29 (7 votes)
How to Contribute to the Coronavirus Response on LessWrong 2020-03-24T22:04:30.956Z · score: 38 (9 votes)
Against Dog Ownership 2020-03-23T09:17:41.438Z · score: 44 (28 votes)
March 21st: Daily Coronavirus Links 2020-03-23T00:43:29.913Z · score: 10 (2 votes)
March 19th: Daily Coronavirus Links 2020-03-21T00:00:54.173Z · score: 19 (4 votes)
March 18th: Daily Coronavirus Links 2020-03-19T22:20:27.217Z · score: 13 (4 votes)
March 17th: Daily Coronavirus Links 2020-03-18T20:55:45.372Z · score: 12 (3 votes)
March 16th: Daily Coronavirus Links 2020-03-18T00:00:33.273Z · score: 15 (2 votes)
LessWrong Coronavirus Link Database 2020-03-13T23:39:32.544Z · score: 75 (17 votes)


Comment by benito on Desperately looking for the right person to discuss an alignment related idea with. (and some general thoughts for others with similar problems) · 2020-10-24T06:19:41.057Z · score: 10 (2 votes) · LW · GW

PM'd you to chat about it more.

Comment by benito on The Darwin Game - Rounds 0 to 10 · 2020-10-24T03:47:22.139Z · score: 13 (6 votes) · LW · GW

This. Is. So. Much. Fun.

Comment by benito on Introduction to Cartesian Frames · 2020-10-24T00:12:36.722Z · score: 8 (4 votes) · LW · GW


I'm exceedingly excited about this sequence. The Embedded Agency sequence laid out a core set of confusions, and it seems like this is a formal system that deals with those issues far better than the current alternatives, e.g. the cybernetics model. This post lays out the basics of Cartesian Frames clearly and communicates key parts of the overall approach ("reasoning like Pearl's to objects like game theory's, with a motivation like Hutter's"). I've also never seen math explained with as much helpful philosophical justification (e.g. "Part of the point of the Cartesian frame framework is that we are not privileging either interpretation"), and I appreciate all of that quite a bit.

It seems likely that by the end of this sequence it will be on a list of my all-time favorite things posted to LessWrong 2.0. I'm looking forward to getting to grips with Cartesian Frames, understanding how they work, and starting to apply those intuitions to my other discussions of agency.

I'm also curating it a little quickly to let people know that Scott is giving a talk on this sequence this Sunday at 12:00PM PT. Furthermore, Scott is holding weekly office hours (see the same link for more info) for people to ask questions, and Diffractor is running a reading group in the MIRIx Discord, for which I recommend people PM him to get an invite (I just did so myself; it's a nice Discord server).

Comment by benito on The bads of ads · 2020-10-23T08:36:50.453Z · score: 2 (1 votes) · LW · GW

I want to raise that I am rarely in the London Underground, and when I am I do find the ads kind of fantastical and exciting. Of course, perhaps I'm buying into a fraud.

I do think that the architecture and design of major city centers have the potential to be a lot more distinctive if they don't have to be just loads of massive screens.

Comment by benito on Rationalist Town Hall: Pandemic Edition · 2020-10-22T21:31:55.143Z · score: 2 (1 votes) · LW · GW

Makes sense! Will make an extra effort to cause notes to get taken then.

Comment by benito on Rationalist Town Hall: Pandemic Edition · 2020-10-22T21:25:20.065Z · score: 3 (2 votes) · LW · GW

That's all awesome to hear :)

Comment by benito on Mark Xu's Shortform · 2020-10-22T19:46:04.660Z · score: 4 (2 votes) · LW · GW

I like this one.

Comment by benito on Rationalist Town Hall: Pandemic Edition · 2020-10-22T18:24:37.228Z · score: 2 (1 votes) · LW · GW

Moved to answers; seems like a fine question to ask.

Comment by benito on Rationalist Town Hall: Pandemic Edition · 2020-10-22T18:07:10.450Z · score: 2 (1 votes) · LW · GW


I think this time I will probably not record it, while we're getting used to it all, because on the margin people don't feel comfortable being videoed. But probably we'll make some notes in a google doc during it that can be shared.

Out of interest, can you not make it because of time zone or because you're generally busy Sundays? 12-2 PT is the time I always pick when I want something to work internationally, so am interested to know why people can't make it.

Comment by benito on Rationalist Town Hall: Pandemic Edition · 2020-10-22T02:32:02.824Z · score: 4 (2 votes) · LW · GW

Also several other LW posts I found personally useful, including:

There's more at the Coronavirus tag page.

Comment by benito on Rationalist Town Hall: Pandemic Edition · 2020-10-22T02:24:23.652Z · score: 19 (8 votes) · LW · GW

I'd like to spend a little time acknowledging good projects executed and work done by rationalists in response to Covid. Here are some that come to mind, but it's definitely not all of them – can people help me add to the list?

Comment by benito on Rationalist Town Hall: Pandemic Edition · 2020-10-22T00:25:15.798Z · score: 2 (1 votes) · LW · GW

Moved to answer; it's fine to make this request as an answer for people to vote on interest (you don't have to talk about this, it mostly just increases the likelihood Zvi will talk about it).

Comment by benito on Rationalist Town Hall: Pandemic Edition · 2020-10-22T00:24:03.114Z · score: 2 (1 votes) · LW · GW

Thanks, was an old FB link.

Comment by benito on When was the term "AI alignment" coined? · 2020-10-21T23:42:31.760Z · score: 2 (1 votes) · LW · GW

Would be interested in a link if anyone is willing to go look for it.

Comment by benito on Mark Xu's Shortform · 2020-10-21T20:03:14.866Z · score: 2 (1 votes) · LW · GW

It's a natural way to cut it up from one's own experience. Each platform has different affordances and brings out different aspects of people, and I get pretty different experiences of them on the different platforms mentioned.

Comment by benito on Mark Xu's Shortform · 2020-10-21T20:02:08.839Z · score: 2 (1 votes) · LW · GW

I am a google doc rationalist! (Or I would like to be. Google docs are great.)

Comment by benito on When was the term "AI alignment" coined? · 2020-10-21T18:29:49.695Z · score: 6 (3 votes) · LW · GW

I recall Eliezer saying that Stuart Russell named the 'value alignment problem', and that it was derived from that. (Perhaps Eliezer derived it?)

Comment by benito on A tale from Communist China · 2020-10-21T18:19:31.352Z · score: 3 (2 votes) · LW · GW

This is a really great comment, thanks. (Consider making it into a post.)

Comment by benito on The Treacherous Path to Rationality · 2020-10-19T22:22:38.322Z · score: 2 (1 votes) · LW · GW

It's a good question. I'll see if I can write a reply in the next few days...

Comment by benito on The Treacherous Path to Rationality · 2020-10-19T22:03:05.471Z · score: 2 (1 votes) · LW · GW

(I know of 1-2 other examples where people did something like double their net wealth.)

Comment by benito on What are some beautiful, rationalist artworks? · 2020-10-18T23:55:20.094Z · score: 4 (2 votes) · LW · GW

What is happening in that photo?

Comment by benito on What are some beautiful, rationalist artworks? · 2020-10-18T06:15:08.216Z · score: 2 (1 votes) · LW · GW

I really like this one, thanks.

Comment by benito on The Darwin Game · 2020-10-18T02:15:29.732Z · score: 13 (6 votes) · LW · GW

Thanks for obeying the norms. That we definitely have. Around time travel technology.

Comment by benito on Open & Welcome Thread – October 2020 · 2020-10-17T23:35:44.527Z · score: 5 (3 votes) · LW · GW

Another (mild) norm proposal: I am against comments that do a line-by-line reply to the comment they're replying to.

I think it reliably makes a conversation very high effort and in-the-weeds, to the cost of talking about big picture disagreements. It often means there's no part of the comment which communicates directly, saying "this is my response and where I think our overarching disagreement lies", it just has lots of small pieces. 

This is similar to my open thread post about google docs which was about how inline commenting seems to disincentivize big-picture responses. 

It's fine to drop threads in conversations; not everything needs to be addressed, and the big picture is more important in most situations. Writing a flowing paragraph is often much better conversationally than loads of one-line replies to one-liners.

Comment by benito on Have the lockdowns been worth it? · 2020-10-16T21:47:18.749Z · score: 2 (1 votes) · LW · GW

Moved to comments for taking a position on the overall question.

Comment by benito on The Solomonoff Prior is Malign · 2020-10-16T05:25:27.581Z · score: 8 (5 votes) · LW · GW

+1 I already said I liked it, but this post is great and will immediately be the standard resource on this topic. Thank you so much.

Comment by benito on Babble challenge: 50 ways of hiding Einstein's pen for fifty years · 2020-10-15T23:59:07.483Z · score: 3 (2 votes) · LW · GW

"Come up with 10 good ideas for achieving X" is the first one that comes to mind. I also like your one at the end quite a bit.

Comment by benito on Has Eliezer ever retracted his statements about weight loss? · 2020-10-15T23:52:09.925Z · score: 2 (1 votes) · LW · GW

If you're in quote text, hit enter twice to leave quote text.

If you highlight text, you'll see the menu with all the options, including the option to toggle whether the highlighted text is in quotes.

The only way the editor submits a comment is if you hit cmd-enter or ctrl-enter, whose sole function is to submit a comment (so don't press it when you want anything else).

Comment by benito on jacobjacob's Shortform Feed · 2020-10-15T23:03:44.790Z · score: 2 (1 votes) · LW · GW

Take my upvote.

Comment by benito on The Solomonoff Prior is Malign · 2020-10-15T18:21:30.578Z · score: 4 (3 votes) · LW · GW

Such a great post.

Note that I changed the formatting of your headers a bit, to make some of them just bold text. They still appear in the ToC just fine. Let me know if you'd like me to revert it or have any other issues.

Comment by benito on Open & Welcome Thread – October 2020 · 2020-10-15T18:01:50.734Z · score: 3 (2 votes) · LW · GW

That's pretty scary.

I expect I have much more flexibility than your family did – I have no dependents, I have no property and few belongings to tie me down, and I expect air travel is much more readily available to me in the present day. I also expect to notice it faster than the supermajority of people (not disanalogous to how I was prepped for Covid like a month before everyone else).

Comment by benito on Open & Welcome Thread – October 2020 · 2020-10-15T16:12:10.905Z · score: 5 (3 votes) · LW · GW

I do expect to be able to vacate a given country in a timely manner if it seems to be falling into a Cultural Revolution.

Comment by benito on What are your greatest one-shot life improvements? · 2020-10-15T07:21:10.431Z · score: 2 (1 votes) · LW · GW

(I, too, now have pretty effortless inbox zero, as opposed to my previous inbox ten thousand.)

Comment by benito on The Treacherous Path to Rationality · 2020-10-14T18:55:08.426Z · score: 21 (8 votes) · LW · GW

"Everyone should occasionally sell some food for status" is not what's being discussed. Your phrasing sounds as though Said said everyone was supposed to bring cookies or something, which is obviously not what he said.

What's being discussed is more like "people should be rewarded for making small but costly contributions to the group". Cookies in and of themselves aren't contributing directly to the group members becoming stronger rationalists, but (as well as just being a kind gift) they're a signal that someone is saying "I like this group, and I'm willing to invest basic resources into improving it".

If such small signals are ignored, it is reasonable to update that people aren't tracking contributions very much, and decide that it's not worth putting in more of your time and effort.

Comment by benito on The Treacherous Path to Rationality · 2020-10-14T18:47:44.422Z · score: 2 (3 votes) · LW · GW

Something about that seems plausible to me. I'll think on it more...

Comment by benito on Have the lockdowns been worth it? · 2020-10-14T04:34:13.573Z · score: 4 (3 votes) · LW · GW

(Just a note that this overall seems fairly fine as a comment rather than an answer, which is how you posted it. Defying the rules in the comments isn't generally good, but I did appreciate reading this comment; it helped me think a bit more clearly about how the lockdown affects families.)

Also, I'm sorry you don't get to see your dad.

Comment by benito on The Treacherous Path to Rationality · 2020-10-14T02:16:46.781Z · score: 8 (5 votes) · LW · GW

In as much as my comment matters here, I'm sorry about that Said :/

Comment by benito on Babble challenge: 50 ways to escape a locked room · 2020-10-13T23:37:06.780Z · score: 5 (3 votes) · LW · GW

This comment is really weird out of context in recent discussion.

Comment by benito on The Treacherous Path to Rationality · 2020-10-13T22:43:41.573Z · score: 11 (4 votes) · LW · GW

Good on your spouse! Very impressed. 

(Also, I don't get the S&P being up so much, am generally pretty confused by that, and updated further that I don't know how to get information out of the stock market.)

I think epistemics is indeed the first metric I care about for LessWrongers. If we had ignored covid or been confident it was not a big deal, I would now feel pretty doomy about us, but I do think we did indeed do quite well on it. I could talk about how we discussed masks, precautions, microcovids, long-lasting respiratory issues, and so on, but I don't feel like going on at length about it right now. Thanks for saying what you said there.

Now, I don't think you/others should update on this a ton, and perhaps we can do a survey to check, but my suspicion is that LWers and Rationalists have gotten covid way, way less than the baseline. Like, maybe an order of magnitude less. I know family who got it, I know whole other communities who got it, but I know hundreds of rationalists and I know so few cases among them.

Of my extended circle of rationalist friends, I know of one person who got it, and this was due to them living in a different community with different epistemic standards; I think my friend fairly viscerally lost some trust in that community for not taking the issue seriously early on. But otherwise, I just know somewhere between 100-200 people who didn't get it (including a bunch of people who were in NY, like Jacob, Zvi, etc) – people who did basic microcovid calculations, started working from home as soon as the first case of community transmission was reported in their area, had stockpiled food in February, updated later on that surface transmission was not a big deal and so stopped washing their deliveries, and so forth.

I also knew a number of people who in February were doing fairly serious research trying to figure out the risk factors for their family, putting out bounties for others to help read the research, and so on, and who made a serious effort to get their family safe.

There have been some private threads in my rationalist social circles where we've said "Have people personally caught the virus in this social circle despite taking serious quarantine precautions?" and there've been several reports of "I know a friend from school who got it" or "I know a family member who got it", and there's been one or two "I got a virus in February before quarantining but the symptoms don't match", but overall I just know almost no people who got it, and a lot of people taking quarantine precautions before it was cool. I also know several people who managed to get tests and took them (#SecurityMindset), and who came up negative, as expected.

One of the main reasons I'm not very confident is that I think it's somewhat badly incentivized for people to report that they personally got it. While it's positive for the common good, and it lets us know about community rates and so on, I think people expect they will be judged a non-zero amount for getting it, and can also trick themselves with plausible deniability because testing is bad ("Probably it was just some other virus, I don't know"). So there's likely some amount of underreporting, correlated with the people who didn't take it seriously in the first place. (If this weren't an issue, I would have said more like 500-1500 of my extended friends and acquaintances.)

And even if that's true, I have the concern that while we acted with appropriate caution in the first few months, once more evidence came in and certain things turned out to be unnecessary (e.g. cleaning deliveries, reheating delivery food, etc), people stuck those precautions out much too long, and some maybe still are.

Nonetheless, my current belief is that rationality did help me and a lot of my 100s of rationalist friends and acquaintances straightforwardly avoid several weeks and months of life lost in expectation, just by doing some basic fermi estimates about the trajectory and consequences of the coronavirus, and reading/writing their info on LessWrong. If you want you and your family to be safe from weird things like this in the future, I think that practicing rationality (and being on LessWrong) is a pretty good way to do this.
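For concreteness, here's a minimal sketch of the kind of back-of-the-envelope estimate I mean, in the style of a microcovid calculation. All the numbers are illustrative assumptions I'm making up for the example, not real data:

```python
# Illustrative Fermi estimate of Covid risk from one recurring activity.
# Every number below is an assumed placeholder, not a measured value.

prevalence = 0.005               # assumed fraction of locals currently infectious
transmission_per_contact = 0.06  # assumed chance one infectious close contact infects you
contacts_per_event = 10          # close contacts per event
events_per_year = 52             # one such event per week

# Chance that at least one contact at a single event infects you
p_event = 1 - (1 - prevalence * transmission_per_contact) ** contacts_per_event

# Chance of at least one infection across a year of weekly events
p_year = 1 - (1 - p_event) ** events_per_year

print(f"Per-event risk: {p_event:.3%}")
print(f"Per-year risk:  {p_year:.1%}")
```

The point of the exercise isn't precision; it's that even crude multiplication like this makes it obvious how small weekly risks compound over a year, which is the kind of thing that got people to change behavior early.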

(Naturally, being married to an epidemiologist is another good way, but I can only have one spouse, and there are lots of weird problems heading our way from other areas too. Oh for the world where the only problem facing us was pandemics.)

(Also thx, I think I have fixed the links.)

Added: I didn't see your reply to Jacobian before writing this. Feel free to refer me to parts of that.

Comment by benito on Have the lockdowns been worth it? · 2020-10-13T01:17:41.312Z · score: 3 (2 votes) · LW · GW

Thx, have removed Italy.

Comment by benito on The Treacherous Path to Rationality · 2020-10-12T11:29:37.061Z · score: 13 (8 votes) · LW · GW

The best startup people were similarly early, and I respect them a lot for that. If you know of another community or person that publicly said the straightforward and true things in public back in February, I am interested to know who they are and what other surprising claims they make.

I do know a lot of rationalists who put together solid projects and have done some fairly useful things in response to the pandemic – like Zvi's and Sarah C's writing, and the LW covid links database, and I heard that Median group did a bunch of useful things, and so on. Your comment makes me think I should make a full list somewhere to highlight the work they've all done, even if they weren't successful.

I wouldn't myself say we've pwned covid, I'd say some longer and more complicated thing by default that points to our many flaws while highlighting our strengths. I do think our collective epistemic process was superior to that of most other communities, in that we spoke about it plainly (simulacra level 1) in public in January/February, and many of us worked on relevant projects.

Comment by benito on Open & Welcome Thread – October 2020 · 2020-10-12T03:37:26.134Z · score: 3 (2 votes) · LW · GW

Highlight the text where you want the link to be, and the editor menu should appear. Then click the link icon (looks like a rotated oval with a straight line in the centre), and enter the link.

Comment by benito on AGI safety from first principles: Introduction · 2020-10-12T02:49:50.580Z · score: 6 (3 votes) · LW · GW

Oli suggests that there are no fields with three-word names, and so "AI Existential Risk" is not a choice. I think "AI Alignment" is currently the most accurate name for the field that encompasses work like Paul's and Vanessa's and Scott/Abram's and so on. I think "AI Alignment From First Principles" is probably a good name for the sequence.

Comment by benito on Everything I Know About Elite America I Learned From ‘Fresh Prince’ and ‘West Wing’ · 2020-10-11T21:59:55.914Z · score: 5 (3 votes) · LW · GW

(I also subscribe to his newsletter and was surprised positively by the quality of the writing.)

Comment by benito on The Treacherous Path to Rationality · 2020-10-11T21:51:33.720Z · score: 3 (2 votes) · LW · GW

I think the thing I want here is a better analysis of the tradeoff and when to take it (according to one's inside view), rather than something like an outside view account that says "probably don't". 

(And you are indeed contributing to understanding that tradeoff; your first comment gives two major reasons. But it still feels true to me to say this about many people in history, not just people today.)

Suppose we plot "All people alive" on the x-axis, and "Probability you should do rationality on your inside view" on the y-axis. Here are two opinions one could have about people during the time of Bacon.

"Some people should do rationality, and most people shouldn't."
"Some people should not think about it, and some people should consider it regularly and sometimes do it."

I want to express something more like the second one than the first.

Comment by benito on AGI safety from first principles: Introduction · 2020-10-11T21:14:50.435Z · score: 2 (1 votes) · LW · GW

It seems a definite improvement on the axis of specificity, I do prefer it over the status quo for that reason.

But it doesn't address the problem of scope-sensitivity. I don't think this sequence is about preventing medium-sized failures from AGI. It's about preventing extinction-level risks to our future.

"A First-Principles Explanation of the Extinction-Level Threat of AGI: Introduction"

"The AGI Extinction Threat from First Principles: Introduction"

"AGI Extinction From First Principles: Introduction"

Comment by benito on Open & Welcome Thread – October 2020 · 2020-10-11T05:30:58.036Z · score: 2 (1 votes) · LW · GW

There is information that's dangerous to share. Private data, like your passwords. Information that can be used for damage, like how to build an atom bomb or synthesize smallpox. And there will be more ideas that are damaging in the future.

(That said I don't expect your idea is one of these.)

Comment by benito on Open & Welcome Thread – October 2020 · 2020-10-11T02:34:05.499Z · score: 3 (2 votes) · LW · GW

Welcome! Your story is pretty cool to hear. Look forward to seeing you around more. By the way, I like your comments, and thought they were all positive contributions (I had upvoted one of them) :)

Comment by benito on The Treacherous Path to Rationality · 2020-10-10T22:03:27.404Z · score: 6 (5 votes) · LW · GW

This isn't much of an update to me. It's like if you told me that a hacker broke out of the simulation, and I responded that it isn't that surprising they did because they went to Harvard. The fact that someone did it at all is the primary and massive update that it was feasible, and that this level of win was attainable for humans at that time if they were smart and determined.

Comment by benito on The Treacherous Path to Rationality · 2020-10-10T20:08:22.581Z · score: 7 (5 votes) · LW · GW

Upvoted, it's also correct to ask whether taking this route is 'worth it'.

I am skeptical of "Moreover, it seems likely that for most people, during most of history, this strategy was the right choice." Remember that half of all humans existed after 1309. In 1561 Francis Bacon was born; he invented the founding philosophy and infrastructure of science. So already it was incredibly valuable to restructure your mind to track reality and take directed, global-scale, long-term action.

And plausibly it was so before then as well. I remember being surprised reading Vaniver's account of Xunzi's writings from around 300 BC, where Vaniver said:

By the end of it, I was asking myself, "if they had this much of rationality figured out back then, why didn't they conquer the world?" Then I looked into the history a bit more and figured out that two of Xunzi's students were core figures in Qin Shi Huang's unification of China to become the First Emperor.