What currents of thought on LessWrong do you want to see distilled?
post by ryan_b · 2021-01-08T21:43:33.464Z · LW · GW · No comments
This is a question post.
The question is inspired by a few comments and a question I have seen recently. The first is a discussion in the 2019 Review [LW(p) · GW(p)] post on the subject of research debt; the second a question from johnswentworth asking what people's confusions are about simulacra [LW · GW] (which I interpret to be a 'what do you want from this distillation' question).
The question is what it says in the title, but I would like to add that there is no expiration. For example, I recently saw cryonics reappear among the posts and questions after falling off the activity radar for years. So old currents of thought are valid candidates, even if the real goal is a re-distillation in light of new developments in the field, or of all the accumulated communication techniques we've considered on LessWrong.
So please describe the current of thought, and your reason for wanting a distillation. The authors may be called to action, or alternatively following Bridgett Kay's suggestion [LW(p) · GW(p)] someone else may take up the challenge.
Answers
I'd like to see an epic post that collects high-quality examples of COVID incompetence on the part of the US government, the FDA, the CDC, the WHO, bioethicists, etc. Zvi's posts contain many examples, but they are spread out over multiple posts and not fact-checked.
It would be really valuable to have a post that collects all this stuff in one place, curates only the really compelling examples, and puts in all the proper citations, footnotes, and explanatory arguments. I would link to it all the time, because it's important evidence about the general competence of our civilization, and of our government in particular. EDIT: The post should also steelman the institutions' decisions.
↑ comment by Sherrinford · 2021-01-11T16:07:08.433Z · LW(p) · GW(p)
To "fact-checked" and "compelling examples" etc., I would add the request that it actually try to steelman these institutions' actions.
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-01-11T17:30:10.193Z · LW(p) · GW(p)
Yes, agreed.
↑ comment by Bob Baker · 2021-01-10T16:33:30.452Z · LW(p) · GW(p)
The culture of the FDA didn't spring into being this year, of course. This book covers its failures to regulate foreign manufacturing of generic drugs, and it substantially dented my previous belief that generic versions of drugs are equivalent to the branded ones. The book is, however, about twice as long as it should be, and you may prefer this podcast with the author.
Replies from: ChristianKl
↑ comment by ChristianKl · 2021-01-10T20:53:31.959Z · LW(p) · GW(p)
Before writing the book Katherine Eban wrote an article for CNN Money on Ranbaxy: http://www.sacw.net/article4564.html
(It's worth noting that CNN no longer hosts the article; it seems someone got them to take it down.)
I think I could use periodic distillation of AI Alignment paradigm development for purposes of staying roughly abreast of the entire space.
I've observed a bunch of evolving paradigms and research agendas over the past couple of years that I sometimes do a shallow dive into, but there's a lot going on and it's a bit disorienting.
A post that's like "here's the current state of Iterated Amplification and Distillation", "here's the current state of what MIRI folk think about corrigibility", "here's the current state of what Paul thinks about corrigibility", "here's the state of AI Safety via Debate" (or whatever paradigms currently have at least some major proponents).
While I am quite the fan of current 'idea'-LessWrong, I would love to see a collection of actionable rationality exercises, especially about core concepts such as those from the sequences.
Explanations should be aimed mainly at non-rationalists. This could be a go-to resource to forward to people roughly interested in the topic but without the time to read through the 'theoretical' posts. Think Hammertime [? · GW], but formulated specifically for non-'formal'-rationalists. Doing the exercises should be entertaining and should produce an intuitive understanding of the same concepts, plus a deeper understanding from having applied them.
Basically, more exercises. More explicit application.
I'm not sure if this counts as a 'distillation', but I'd like to see a good overview/history of UDASSA [LW · GW]/UDT [LW · GW] as approaches to anthropics and metaphysics [LW · GW]. I think this is probably the single most significant piece of intellectual progress produced by LW, besides the arguments for AI x-risk. And yet, most users seem to be unaware, judging by the periodic independent [LW · GW] re-discoveries [LW · GW] of some of the ideas.
(I guess people are familiar with UDT as an acausal decision theory, but I think the applications to anthropics and metaphysics are less well-known, and IMO more interesting)
One thing I would like distilled is Eliezer's metaethics sequence [? · GW]. (I might try to do this one myself at some point.)
Another is a discussion of how the literature on biases has held up to the replication crisis (I know the priming stuff has fallen, but how much does that taint the rest? What else has replicated or failed to replicate?).
↑ comment by Kaveh-Sedghi · 2021-01-09T03:28:24.207Z · LW(p) · GW(p)
Yes, I am quite hesitant to leap into the Sequences, not knowing how valid the studies cited are. I too would like to know how some of the mainstay LW concepts fare in light of the replication crisis (as psychology especially has faced a lot of problems in that regard).
Would be great to see anti-aging research investigated in more detail. On one hand, many people in and around the ratiosphere seem to believe we're about to see tangible progress in slowing aging in humans soon (a decade or two); see e.g. this great recent post [LW · GW] by JackH. On the other hand, other people in this sphere, and also in biotech, argue that we're still quite a distance away from the point where dedicated anti-aging research makes sense, and that for now the focus should be on fundamental biology. I'd be happy to see up-to-date evidence for these two positions evaluated side by side.
And as a potentially separate thread (or threads) of thought: what the ramifications of each being true would be for people interested in the general area of not dying, and how it can affect one's lifestyle, donations, career choices, etc.
I'd love to see more explorations of the connections/overlaps/gaps/disagreements between Moral Mazes, Dictator's Handbook, and Gervais Principle like this recent post [LW · GW]. I've started hacking away at it in my shortform, but would love some help/an excuse to quit.
↑ comment by Yoav Ravid · 2021-01-10T16:41:03.622Z · LW(p) · GW(p)
I'm currently working on a post distilling Selectorate Theory (i.e., Dictator's Handbook, though I'm basing it more on their older book, The Logic of Political Survival). I probably won't touch on its interaction with other theories in that post, but once there's a good summary of it on the site it will be easier to have a discussion about how it combines with other ideas.
Replies from: crl826
↑ comment by crl826 · 2021-01-10T18:15:39.644Z · LW(p) · GW(p)
Awesome. Looking forward to it.
I've also put summaries of 3 of the 6 chapters of Gervais Principle on my shortform. (The other chapters frankly weren't that interesting to me.)
Working on a summary of Moral Mazes right now. Not sure if I will post it, since we already have Zvi's version.
Replies from: Yoav Ravid
↑ comment by Yoav Ravid · 2021-10-12T11:44:07.081Z · LW(p) · GW(p)
It took me quite a while, but the selectorate theory post is finally published [LW · GW]! :)
Replies from: crl826
I have a sense that we could collate thought and interest on "how to live better at home": setting up super-bright lighting, tips for finding friends and partners, raising kids, cohousing and coparenting, choosing places to move to en masse. It's about your existence at the place where you spend most of your time, and co-existence with the people you love so much that you live with them.
The arguments for Bayesian epistemology embodying rationality. It would be helpful to see this position elucidated all in one place.
tristanm [LW · GW] writes "I’ve noticed a recent trend towards skepticism of Bayesian principles and philosophy ... which I have regarded with both surprise and a little bit of dismay, because I think progress within a community tends to be indicated by moving forward to new subjects and problems rather than a return to old ones that have already been extensively argued for and discussed." These extensive arguments and discussions about this topic seem to be scattered about LW and other sites. It would help Bayesian proponents to have a standard sequence to point to, especially if they think the issue is settled.
I am a fan of johnswentworth's gears sequence [? · GW]. It would be fruitful to have this distilled.
It would be good to have some polling of which gears are tight and which are slack in the present state of the world for various big projects of interest to LessWrong members, e.g. AGI research, ethics, various sciences, societal progress, etc.