Rationality Compendium

post by ScottL · 2015-08-23T08:00:11.177Z · LW · GW · Legacy · 5 comments

I want to create a rationality compendium (a collection of concise but detailed information about a particular subject), and I want to know whether you think this would be a good idea. The rationality compendium would essentially be a series of posts serving three purposes: a guide that Less Wrong newbies can use to discover which resources to look into further, a refresher of the main concepts for Less Wrong veterans, and a best-practices document explaining techniques for applying the core Less Wrong/rationality concepts. These techniques should preferably have been verified to be useful in some way. Perhaps there will also be some training-specific posts in which we can track whether people are actually finding the techniques useful.

I only want to write this because I am lazy. In this context, I mean lazy as it is described by Larry Wall:

Laziness: The quality that makes you go to great effort to reduce overall energy expenditure.

I think that a rationality compendium would not only prove that I have correctly understood the available rationality material, but it would also ensure that I am actually making use of this knowledge. That is, applying the rationality materials that I have learnt in ways that allow me to improve my life.

If you think that a rationality compendium is not needed or would not be overly helpful, then please let me know. I also want to point out that I do not think that I am necessarily the best person to do this and that I am only doing it because I don’t see it being done by others.

For the rationality compendium, I plan to write a series of posts which should, as much as possible, be:

  • Written in standard terms: Less Wrong-specific terms may be linked in the related materials section, but common or standard terminology will be used wherever possible.
  • Concise: the posts should contain only quick overviews of the established rationality concepts. They shouldn’t introduce “new” ideas; the one exception is a new idea that allows multiple rationality concepts to be combined and explained together. If existing ideas require refinement, that should happen in a separate post, which the rationality compendium may link to if the post is deemed high quality.
  • Comprehensive: links to all related posts, wikis, and other resources should be provided in a related materials section, so that readers can dive deeper into materials that pique their interest while the posts themselves remain concise. The aim of the rationality compendium is to create a condensed and distilled version of the available rationality materials. This means it is not meant to be light reading, as a large number of concepts will be presented in each post.
  • Collaborative: the posts should go through many rounds of edits based on feedback in the comments. I don't think I will be able to create perfect first posts, but I am willing to expend some effort to iteratively improve them until they reach a suitable standard. I hope that enough people will be interested in a rationality compendium that I can gather enough feedback to improve the posts. I plan for the posts to stay in discussion for a long time and will rerun posts if required. I welcome all kinds of feedback, positive or negative, but request that you provide information I can use to improve the posts.
  • Related only to rationality: for example, concepts from AI or quantum mechanics won’t be mentioned unless they are required to explain a rationality concept.
  • Ordered: the points in the compendium will be grouped according to overarching principles. 
I will provide a link to the posts created in the compendium here:
  1. A rational agent, given its capabilities, is one that thinks and acts optimally

5 comments

Comments sorted by top scores.

comment by [deleted] · 2015-08-23T13:40:03.755Z · LW(p) · GW(p)

I think this is an excellent idea. Finally, someone is rewriting concepts from the Sequences in normal language - not the idiosyncratic language of the Sequences, nor the mathematical symbols at Intelligent Agents Forum.

comment by MarsColony_in10years · 2015-08-24T16:46:38.948Z · LW(p) · GW(p)

If you intend to write a compendium, I would suggest trying to stress some of the aspects of rationality that perhaps weren't given as much time as they deserved in the Sequences. For example, recently, in a comment on the Effective Altruism Forum, someone critiqued the focus on the Bayes’ theorem component of rationality:

Bayesianism is not rationality. It's a particular mathematical model of rationality. I like to analogize it to propositional logic: it captures some important features of successful thinking, but it's clearly far short of the whole story.

We need much more sophisticated frameworks for analytical thinking. This is my favorite general purpose approach, which applies to mixed quant/qual evidence, and was developed by consideration of cognitive biases at the CIA:

https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/art11.html

But of course this isn't rationality either. It's never been codified completely, and probably cannot be.
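For readers unfamiliar with the Bayesian model being critiqued, a minimal sketch of a single Bayes' theorem update in Python (not from the original comment; the function name and the numbers are illustrative assumptions):

```python
def bayes_update(prior, likelihood, likelihood_alt):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# Illustrative numbers: 1% prior, evidence is 90% likely given the
# hypothesis, 5% likely otherwise (a classic rare-disease-test setup).
posterior = bayes_update(prior=0.01, likelihood=0.9, likelihood_alt=0.05)
print(round(posterior, 3))  # → 0.154
```

Even with strong evidence, the low prior keeps the posterior modest, which is the kind of result the "Bayesianism" label usually refers to.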

Honestly, I was struck more by the similarities than the differences between the Sequences and the CIA's guide to qualitative analysis of competing hypotheses. However, the CIA's guide does put more stress on things like competing hypotheses and "diagnostic value". I'm not sure the Sequences even mention a process for determining how sensitive conclusions are to faulty data, but they definitely should. I can't find the name for it at the moment (and the CIA's guide doesn't name it either), but I know there is a mathematical technique for complex functions like climate models where researchers test the sensitivity of the model to assumptions by varying the inputs and checking whether the model's output is invariant.
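The technique described above sounds like one-at-a-time sensitivity analysis. A minimal sketch in Python (the toy model, the function names, and the perturbation size are all illustrative assumptions, not anything from the CIA guide):

```python
def sensitivity(model, baseline, rel_step=0.1):
    """Perturb each input one at a time by rel_step (relative) and
    report how much the model's output moves from its baseline value."""
    base_out = model(**baseline)
    effects = {}
    for name, value in baseline.items():
        perturbed = dict(baseline)
        perturbed[name] = value * (1 + rel_step)
        effects[name] = model(**perturbed) - base_out
    return effects

# Toy "climate" model: output depends strongly on a, weakly on b.
toy_model = lambda a, b: 3 * a + 0.01 * b
print(sensitivity(toy_model, {"a": 2.0, "b": 2.0}))
```

A conclusion whose output barely moves when an input is varied is robust to errors in that input; a large effect flags an assumption worth double-checking.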

A desire for a more evidence-based focus was voiced here, although I think we’re not all that far off the mark when you take a broad view. It’s mostly a matter of emphasizing a few key points and playing down a few others that we’ve over-emphasized.

Replies from: ScottL
comment by ScottL · 2015-08-25T04:04:47.699Z · LW(p) · GW(p)

Thanks for the link to the CIA book. It looks really good, though I have only briefly looked at it. Maybe I will create a separate post where I create and describe some framework in detail. If that post gets liked enough, then I will provide a link to it in the compendium. Do you have any other resources regarding what a potential framework should look like?

Replies from: MarsColony_in10years
comment by MarsColony_in10years · 2015-08-30T05:36:23.764Z · LW(p) · GW(p)

I wish I had more sources, but honestly that's about it. I'll certainly comment on future posts if I feel I have something useful to add, though.

comment by imuli · 2015-08-25T17:31:44.694Z · LW(p) · GW(p)

We're a long way from having any semblance of a complete art of rationality, and I think that holding on to even the names used in the greater Less Wrong community is a mistake. Good names for concepts are important, and although it may be confusing in the short term while we're still developing the art, we can do better if we don't tie ourselves to the past. Put the old names at the end of the entry, or under a history heading; pushing the innovation of jargon forward is valuable.