I'm exceedingly excited about this sequence. The Embedded Agency sequence laid out a core set of confusions, and it seems like this is a formal system that deals with those issues far better than the current alternatives e.g. the cybernetics model. This post lays out the basics of Cartesian Frames clearly and communicates key parts of the overall approach ("reasoning like Pearl's to objects like game theory's, with a motivation like Hutter's"). I've also never seen math explained with as much helpful philosophical justification (e.g. "Part of the point of the Cartesian frame framework is that we are not privileging either interpretation"), and I appreciate all of that quite a bit.
It seems likely that by the end of this sequence it will be on a list of my all-time favorite things posted to LessWrong 2.0. I'm looking forward to getting to grips with Cartesian Frames, understanding how they work, and starting to apply those intuitions to my other discussions of agency.
I'm also curating it a little quickly to let people know that Scott is giving a talk on this sequence this Sunday at 12:00PM PT. Furthermore, Scott is holding weekly office hours (see the same link for more info) for people to ask questions, and Diffractor is running a reading group in the MIRIx Discord, which I recommend people PM him to get an invite to (I just did so myself, it's a nice Discord server).
I think this time I will probably not record it, while we're getting used to it all, because on the margin people don't feel comfortable being videoed. But probably we'll make some notes in a google doc during it that can be shared.
Out of interest, can you not make it because of time zone or because you're generally busy Sundays? 12-2 PT is the time I always pick when I want something to work internationally, so am interested to know why people can't make it.
I'd like to spend a little time acknowledging good projects executed and work done by rationalists in response to Covid. Here are some that come to mind, but it's definitely not all of them, can people help me add to the list?
Epidemic Forecasting, a project that amongst other things took superforecasters and used them to answer decision-relevant questions for various governments and vaccine trials. This was led by many LessWrongers including Jacob Lagerros, Josh Jacobson, Jan Kulveit, Connor Flexman, and more. My sense is that this project was unsuccessful in causing major change (e.g. preventing 10k deaths), but was nonetheless a strongly worthwhile effort, and the relevant people learned a lot about the world and its inadequacies.
MicroCovid.org, a project led by LessWronger Catherio that built the best microcovid calculator I know of, and which has been used by many rationalists in their group coordination and personal risk assessment. (She wrote about it on LW here.)
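For anyone unfamiliar with the microcovid approach, the core idea can be sketched roughly as follows. This is my own simplified illustration with made-up multipliers, not microcovid.org's actual model or code:

```python
# Rough sketch of a microcovid-style risk estimate (my own simplification,
# with assumed multipliers; NOT microcovid.org's actual model).
# A microCOVID = a one-in-a-million chance of contracting COVID.

def activity_microcovids(prevalence_per_100k, hours, people,
                         masked=True, outdoors=False):
    """Very rough per-activity risk estimate under assumed multipliers."""
    base_transmission_per_hour = 0.06    # assumed baseline: indoor, unmasked, close contact
    risk = prevalence_per_100k / 100_000 # chance a given contact is infectious
    risk *= base_transmission_per_hour * hours * people
    if masked:
        risk *= 0.25                     # assumed mask reduction
    if outdoors:
        risk *= 0.05                     # assumed outdoor reduction
    return risk * 1_000_000              # convert probability to microCOVIDs

# e.g. a 2-hour masked indoor gathering with 4 people at 200/100k prevalence
print(activity_microcovids(200, hours=2, people=4))  # → 240.0 microCOVIDs
```

The point of the tool is less the exact multipliers and more that it gives groups a shared, quantitative vocabulary for negotiating risk budgets.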
It's a natural way to cut it up from one's own experience. Each platform has different affordances and brings out different aspects of people, and I get pretty different experiences of them on the different platforms mentioned.
Another (mild) norm proposal: I am against comments that reply line-by-line to the comment they're responding to.
I think it reliably makes a conversation very high effort and in-the-weeds, at the cost of talking about big-picture disagreements. It often means that no part of the comment communicates directly by saying "this is my response and where I think our overarching disagreement lies"; it just has lots of small pieces.
This is similar to my open thread post about google docs which was about how inline commenting seems to disincentivize big-picture responses.
It's fine to drop threads in conversations; not everything needs to be addressed, and the big picture is more important in most situations. Writing a flowing paragraph is often much better conversationally than loads of one-line replies to one-liners.
Note that I changed the formatting of your headers a bit, to make some of them just bold text. They still appear in the ToC just fine. Let me know if you'd like me to revert it or have any other issues.
I expect I have much more flexibility than your family did – I have no dependents, I have no property / few belongings to tie me down, and I expect flight travel is much more readily available to me in the present-day. I also expect to notice it faster than the supermajority of people (not disanalogous to how I was prepped for Covid like a month before everyone else).
"Everyone should occasionally sell some food for status" is not what's being discussed. Your phrasing sounds as though Said said everyone was supposed to bring cookies or something, which is obviously not what he said.
What's being discussed is more like "people should be rewarded for making small but costly contributions to the group". Cookies in-and-of-themselves aren't contributing directly to the group members becoming stronger rationalists, but (as well as just being a kind gift) it's a signal that someone is saying "I like this group, and I'm willing to invest basic resources into improving it".
If such small signals are ignored, it is reasonable to update that people aren't tracking contributions very much, and decide that it's not worth putting in more of your time and effort.
(Just a note that posting this as a comment rather than an answer, as you did, seems fairly fine overall. Defying the rules in the comments isn't generally good, but I did appreciate reading this comment; it helped me think a bit more clearly about how the lockdown affects families.)
(Also, I don't get the S&P being up so much, am generally pretty confused by that, and updated further that I don't know how to get information out of the stock market.)
I think epistemics is indeed the first metric I care about for LessWrongers. If we had ignored covid or been confident it was not a big deal, I would now feel pretty doomy about us, but I do think we did indeed do quite well on it. I could talk about how we discussed masks, precautions, microcovids, long-lasting respiratory issues, and so on, but I don't feel like going on at length about it right now. Thanks for saying what you said there.
Now, I don't think you/others should update on this a ton, and perhaps we can do a survey to check, but my suspicion is that LWers and Rationalists have gotten covid way, way less than the baseline. Like, maybe an order of magnitude less. I know family who got it, I know whole other communities who got it, but I know hundreds of rationalists and I know so few cases among them.
Of my extended circle of rationalist friends, I know of one person who got it, and this was due to them living in a different community with different epistemic standards; I think my friend fairly viscerally lost some trust in that community for not taking the issue seriously early on. But otherwise, I just know somewhere between 100-200 people who didn't get it (including a bunch of people who were in NY, like Jacob, Zvi, etc), people who did basic microcovid calculations, who started working from home as soon as the first case of community transmission was reported in their area, who had stockpiled food in February, who updated later on that surface transmission was not a big deal and so stopped washing their deliveries, etcetera and so forth.
I also knew a number of people who in February were doing fairly serious research trying to figure out the risk factors for their family, putting out bounties for others to help read the research, and so on, and who made a serious effort to get their family safe.
There have been some private threads in my rationalist social circles where we've said "Have people personally caught the virus in this social circle despite taking serious quarantine precautions?" and there've been several reports of "I know a friend from school who got it" or "I know a family member who got it", and there's been one or two "I got a virus in February before quarantining but the symptoms don't match", but overall I just know almost no people who got it, and a lot of people taking quarantine precautions before it was cool. I also know several people who managed to get tests and took them (#SecurityMindset), and who came up negative, as expected.
One of the main reasons I'm not very confident is that I think people are somewhat badly incentivized to report that they personally got it. While reporting is positive for the common good, and it lets us know about community rates and so on, I think people expect they will be judged a non-zero amount for getting it, and can also trick themselves with plausible deniability because testing is bad ("Probably it was just some other virus, I don't know"). So there's likely some amount of underreporting, correlated with the people who didn't take it seriously in the first place. (If this weren't an issue, I would have said my sample was more like 500-1500 of my extended friends and acquaintances.)
And, even if that's true, I have concerns: while we acted with appropriate caution in the first few months, when more evidence came in and certain precautions turned out to be unnecessary (e.g. cleaning deliveries, reheating delivery food, etc), I think people stuck with them much too long, and some maybe still are.
Nonetheless, my current belief is that rationality did help me and a lot of my 100s of rationalist friends and acquaintances straightforwardly avoid several weeks and months of life lost in expectation, just by doing some basic fermi estimates about the trajectory and consequences of the coronavirus, and reading/writing their info on LessWrong. If you want you and your family to be safe from weird things like this in the future, I think that practicing rationality (and being on LessWrong) is a pretty good way to do this.
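To illustrate the kind of basic Fermi estimate I mean, here is a toy version with entirely made-up illustrative numbers (not anyone's actual calculation):

```python
# Toy Fermi estimate of the expected life saved by taking precautions,
# using made-up illustrative numbers (NOT anyone's actual figures).

p_catch_no_precautions = 0.20    # assumed chance of infection over the year, no precautions
p_catch_with_precautions = 0.02  # assumed chance with serious precautions
expected_weeks_lost_if_infected = 30  # assumed expectation over mortality + long-term effects

saved = (p_catch_no_precautions - p_catch_with_precautions) * expected_weeks_lost_if_infected
print(f"Expected weeks of life saved by precautions: {saved:.1f}")  # → 5.4
```

The specific numbers don't matter much; the point is that a five-minute calculation like this was enough to justify acting early.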
(Naturally, being married to an epidemiologist is another good way, but I can only have one spouse, and there are lots of weird problems heading our way from other areas too. Oh for the world where the only problem facing us was pandemics.)
(Also thx, I think I have fixed the links.)
Added: I didn't see your reply to Jacobian before writing this. Feel free to refer me to parts of that.
The best startup people were similarly early, and I respect them a lot for that. If you know of another community or person that publicly said the straightforward and true things in public back in February, I am interested to know who they are and what other surprising claims they make.
I do know a lot of rationalists who put together solid projects and have done some fairly useful things in response to the pandemic – like epidemicforecasting.org and microcovid.org, and Zvi's and Sarah C's writing, and the LW covid links database, and I heard that Median group did a bunch of useful things, and so on. Your comment makes me think I should make a full list somewhere to highlight the work they've all done, even if they weren't successful.
I wouldn't myself say we've pwned covid, I'd say some longer and more complicated thing by default that points to our many flaws while highlighting our strengths. I do think our collective epistemic process was superior to that of most other communities, in that we spoke about it plainly (simulacra level 1) in public in January/February, and many of us worked on relevant projects.
Oli suggests that there are no fields with three-word-names, and so "AI Existential Risk" is not a choice. I think "AI Alignment" is the currently most accurate name for the field that encompasses work like Paul's and Vanessa's and Scott/Abram's and so on. I think "AI Alignment From First Principles" is probably a good name for the sequence.
I think the thing I want here is a better analysis of the tradeoff and when to take it (according to one's inside view), rather than something like an outside view account that says "probably don't".
(And you are indeed contributing to understanding that tradeoff; your first comment gives two major reasons. But the claim still feels to me true of many people in history, not just people today.)
Suppose we plot "All people alive" on the x-axis, and "Probability you should do rationality on your inside view" on the y-axis. Here are two opinions one could have about people during the time of Bacon.
I want to express something more like the second one than the first.
There is information that's dangerous to share. Private data, like your passwords. Information that can be used for damage, like how to build an atom bomb or smallpox. And there will be more ideas that are damaging in the future.
(That said I don't expect your idea is one of these.)
This isn't much of an update to me. It's like if you told me that a hacker broke out of the simulation, and I responded that it isn't that surprising they did, because they went to Harvard. The fact that someone did it at all is the primary and massive update: it was feasible, and this level of win was attainable for humans at that time if they were smart and determined.
Upvoted, it's also correct to ask whether taking this route is 'worth it'.
I am skeptical of "Moreover, it seems likely that for most people, during most of history, this strategy was the right choice." Remember that half of all humans existed after 1309. In 1561 Francis Bacon was born, who invented the founding philosophy and infrastructure of science. So already it was incredibly valuable to restructure your mind to track reality and take directed global-scale long-term action.
By the end of it, I was asking myself, "if they had this much of rationality figured out back then, why didn't they conquer the world?" Then I looked into the history a bit more and found that two of Xunzi's students were core figures in Qin Shi Huang's unification of China to become the First Emperor.