I got into the idea of deliberately developmental organizations. Are DDOs a good idea? I still think probably yes, but they're easy to get wrong. What's important is that I spent a lot of time thinking about and then experimenting with ways to affect the culture of the organization, and thereby came to understand how organizations work.
What have you found in your experiments, in terms of what helps or hurts in developing DDO culture?
If it's all about prediction, why do poor teams still have fans?
2 years later, I'd still be interested in your model if you're willing to share it.
I can't shake the feeling that throughout the book Sowell tries to make a case for a more right-wing/free-market point of view without admitting it, albeit in the most eloquent manner.
Did you find any of his political claims to be dubious?
FYI this link doesn't go anywhere
Here's a link to the book's Goodreads page
I really like the idea of doing a pre-mortem here.
Suppose you and I have two different models, and my model is less wrong than yours. Suppose that my model assigns a 40% probability to event X and your model assigns 60%; we disagree and bet, and event X happens. If I had an oracle over the true distribution of X, my write-up would consist of saying "this falls into the 40% of cases, as predicted by my model", which doesn't seem very useful. In the absence of an oracle, I would end up writing up praise for, and updating towards, your more wrong model, which is obviously not what we want.
This approach might lead to over-updating on single bets. You'd need to record your bets, and the odds on those bets, over time to see how calibrated you are. If your calibration over time is poor, then you should be updating your model. Perhaps we can weaken the suggestion in the post to writing a post-mortem on why you may be wrong. Then, when you reflect on multiple bets over time, you could try to tease out common patterns and deficits in your model-making.
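To make "record your bets and check calibration over time" concrete, here's a minimal sketch of what the bookkeeping could look like; the helper names and toy data are made up for illustration, not part of the original suggestion:

```python
# Minimal sketch of the bookkeeping: each bet is logged as
# (stated_probability, outcome), where outcome is 1 if the event happened.

def brier_score(bets):
    """Mean squared error between stated probabilities and outcomes (lower is better)."""
    return sum((p - outcome) ** 2 for p, outcome in bets) / len(bets)

def calibration_table(bets):
    """Group bets by stated probability (rounded to the nearest 10%) and
    report observed frequency vs. number of bets in each group."""
    groups = {}
    for p, outcome in bets:
        groups.setdefault(round(p, 1), []).append(outcome)
    return {p: (sum(v) / len(v), len(v)) for p, v in sorted(groups.items())}

bets = [(0.4, 1), (0.7, 1), (0.6, 0), (0.8, 1), (0.4, 0)]  # toy data
print(brier_score(bets))        # overall accuracy of your probabilities
print(calibration_table(bets))  # e.g. {0.4: (0.5, 2), ...}
```

Over enough bets, a persistent gap between a stated-probability bucket and its observed frequency is the kind of pattern worth writing a post-mortem about.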
Interesting about Ultralearning, I'll need to look at it in more detail at some point. Without spaced repetition/incremental reading, that looks like the best method of learning to me.
His book touches on spaced repetition (he's a big proponent of the testing effect) and other things. It's really about how to put together effective learning projects, from the research phase, through execution.
Regarding SuperMemo, yes, I use the software and incremental reading extensively (if you have an interest in learning it, I would happily teach you).
I am interested in IR, but I don't have a Windows machine (I'm on macOS/Linux) and don't think the overhead of maintaining a VM would be worth it. Do you IR everything you read online, or do you reserve it for materials in your field? I mostly take notes in Roam, and add particularly salient things that I think I'll want to remember to Anki.
I also subscribe heavily to Woz's ideas. I like them because they tend to be much closer to global maxima (e.g. free-running sleep), since societal/academic norms do not restrict his views.
Noted. The SuperMemo wiki has always seemed quite unwieldy to me, but I'll take a closer look at what he has to say on topics outside of spaced repetition.
1. You know what you don't know, so if you need some preceding information you can find it for yourself (in large part thanks to the internet).
2. Teaching is centered on the idea that a teacher knows what you should know better than you do. In many cases, I don't think this makes much sense. If I want to learn how to make thing X, getting a general education in the field X falls into (field Y) doesn't make sense. Learning a bunch of useless things in field Y is a waste of my time. If I'm deciding what to learn by myself, I can make sure not only that I'm learning things efficiently but that I'm choosing what to learn effectively.
This is the approach advocated by Scott Young in the book Ultralearning. You build out a learning project for the thing you actually want to learn, learn by doing, and you fill in obvious gaps that are 'rate-limiting' to the learning 'reaction' as you go along. Learning by working directly on the end result you actually want also sidesteps the problem of transfer: students are typically unable to apply the abstract classroom skills they've been taught to real-world situations.
I see you link to SuperMemo and ask about it a lot. Do you use that software, and do you generally subscribe to Wozniak's ideas?
I think that's a bit of a shame because I personally have found LW-style thinking useful for programming. My debugging process has especially benefited from applying some combination of informal probabilistic reasoning and "making beliefs pay rent", which enabled me to make more principled decisions about which hypotheses to falsify first when finding root causes.
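To make that concrete, here's a toy sketch of the kind of prioritization I mean; the hypotheses, priors, and time estimates below are entirely made up for illustration:

```python
# Toy sketch: order debugging hypotheses by (prior probability of being the
# root cause) / (cost of the experiment that would falsify them).
# The hypotheses and numbers are illustrative, not real data.

hypotheses = [
    {"name": "stale cache entry",        "prior": 0.40, "minutes_to_test": 5},
    {"name": "race condition in worker", "prior": 0.35, "minutes_to_test": 60},
    {"name": "bad config in staging",    "prior": 0.15, "minutes_to_test": 2},
    {"name": "compiler bug",             "prior": 0.05, "minutes_to_test": 240},
]

# Cheap, reasonably likely hypotheses float to the top of the queue.
for h in sorted(hypotheses, key=lambda h: h["prior"] / h["minutes_to_test"], reverse=True):
    print(f'{h["name"]}: {h["prior"]:.2f} prior, {h["minutes_to_test"]} min to test')
```

The numbers are informal guesses, but just writing them down forces each hypothesis to pay rent: it has to predict something I can check quickly.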
As someone who landed on your comment specifically by searching for what LW has said about software engineering in particular, I'd love to read more about your methods, experiences, and thoughts on the subject. Have you written about this anywhere?
Thanks for this, it's a good unifying summary on systemization that I felt was valuable in addition to reading the Systemization chapter in the CFAR Handbook.
Another thing that falls into the 'spend your money to conserve attention' category is hiring a personal assistant. A fellow CFAR alum convinced me to try it out, and it's definitely effective. I fell out of using my PA, but that is something I want to revisit, possibly when I have more money.
Automatically donate money.
This might be bad because Giving Tuesday exists.
Is this out of fear of missing out on matching donations? If so, you could just set up your recurring donations to coincide with giving Tuesday. Like you say later in the article, find ways to make the decision to give only once (or as few times as possible).
Checklists
Neat, I had 'search for what LW has to say about checklists' on my to-do list, but I'd never made an explicit connection between them and systemization. I've added some checklists to my list of candidate systems, as well as a system for updating them when they fail (of course, you could incorporate techniques like Murphyjitsu into writing your checklists too!).
Always hang your backpack on the command hook by your door.
Great idea, ordering a command hook now.
A powerful form of systemization is a systematic way to generate systems. How to construct these is beyond the scope of this post, but I just want to take a moment to shill Getting Things Done as an amazing meta-system.
It's not obvious to me how GTD generates systems. Could you elaborate here?
Yes, I used Anki in college for a range of different courses. It made memorization-based courses (e.g. art history) an absolute breeze, and helped me build my conceptual tower for advanced math courses. Spaced repetition is quite useful for remembering things. I recommend reading this article by Michael Nielsen, alongside the comprehensive reference from Gwern.
I'm skeptical of the value of Readwise, because it is so passive. I think part of the value of using SRS programs like Anki comes from formulating good questions and structuring your knowledge into atomic facts. You need to have at least some understanding of the material in order to be able to make good flashcards. Flashcards that are questions or cloze deletions have a built-in feedback mechanism: did I answer the question correctly, and if so, how difficult was recalling the answer? I don't think being shown things that I highlighted while reading is going to help me learn the material well. If you just want to be reminded of some concepts or a beautiful passage periodically, it should work well.
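For context on that feedback loop: the grade you give yourself after each recall attempt is exactly what drives scheduling in SM-2-style systems. Here's a simplified sketch of the classic SM-2 update rule, as a toy illustration of the idea rather than the exact scheduler Anki or SuperMemo ships today:

```python
# Simplified SM-2 scheduling rule: the self-graded quality of each recall
# attempt is the feedback that decides how long the card waits before it
# comes back.

def sm2_step(quality, repetitions, interval, easiness):
    """quality: self-graded recall from 0 (blackout) to 5 (perfect)."""
    if quality < 3:                      # failed recall: start the card over
        repetitions, interval = 0, 1
    else:                                # successful recall: grow the interval
        if repetitions == 0:
            interval = 1
        elif repetitions == 1:
            interval = 6
        else:
            interval = round(interval * easiness)
        repetitions += 1
    # Easiness drifts up or down with the quality of the answer.
    easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return repetitions, interval, easiness

state = (0, 0, 2.5)
for q in [5, 4, 3, 5]:                   # four review sessions with these grades
    state = sm2_step(q, *state)
    print(f"next review in {state[1]} day(s), easiness {state[2]:.2f}")
```

Passive highlights from Readwise never generate that quality signal, which is part of why I doubt they teach as much.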
Murray has a new book out, Human Diversity, so that may be a good place to start.
Thank you for writing such a clear article on the issue. Cleared up my confusion around EMH, and especially how it differs from the random walk hypothesis. I'll definitely reference this article when people bring up EMH.
specifically focused on doing planks, an exercise that's far more intellectually challenging than physically challenging.
How are planks intellectually challenging? They certainly present a great physical challenge, so this is an interesting claim.
If, however, you’ve developed more stoic thinking patterns and tell yourself “I made a mistake, but that’s already happened, so instead of regretting it I’m going to focus on what I can do to avoid that mistake in the future”, you’ll also likely have body language and speech that don’t communicate regret in the same way. Sometimes people will recognize that you are still aware of your mistake but are approaching it from a different angle, especially if they already know you, but don’t count on it.
This seems like it could be mitigated with clear communication. You may not outwardly appear sad to your teammates, but you can still hold your hands up and admit that you made a mistake.
As a student, did you experience any particular frustrations with this approach?
Retweet Trump with comment.
What is the error that you're implying here?
A simple example is debugging code: a gears-level approach is to try and understand what the code is doing and why it doesn't do what you want; a black-box approach is to try changing things somewhat randomly.
To drill down further, a great way to build a model of why a defect arises is to use the scientific method. You generate some hypothesis about the behavior of your program (if X is true, then Y) and then test your hypothesis. If the results of your test invalidate the hypothesis, you've learned something about your code and where not to look. If your hypothesis is confirmed, you may be able to resolve your issue, or at least refine your hypothesis in the right direction.
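As a made-up illustration of that loop (the function and the bug below are hypothetical, not from the original comment), here's what turning a hypothesis into a cheap, falsifiable test can look like:

```python
# Hypothesis: "If the parser drops records with a missing 'id' field (X),
# then totals will come up short on files containing such records (Y)."

def parse_records(lines):
    """Toy stand-in for the code under suspicion."""
    records = []
    for line in lines:
        fields = dict(pair.split("=") for pair in line.split(","))
        if "id" in fields:          # suspected culprit: silently skips bad rows
            records.append(fields)
    return records

def test_hypothesis():
    lines = ["id=1,amount=10", "amount=20", "id=3,amount=30"]  # one row lacks an id
    parsed = parse_records(lines)
    # If the hypothesis is right, we get 2 records back instead of 3.
    print(f"expected 3 records, got {len(parsed)}")

test_hypothesis()
```

Whether the count comes back as 2 or 3, you've learned something: either you've found the gear that explains the defect, or you've eliminated a suspect and can move on to the next hypothesis.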
There is some irony in the author's insistence that Musk is excellent because of his exceptional software, not his hardware. How could the author possibly know this, or separate out the effects of Musk's raw intellectual horsepower and his critical reasoning skills?
I did find this post quite inspirational, although I do wonder how the author came up with the Want box / Reality box / Strategy box model. It doesn't seem like Musk explicitly gave this model to the author.