Comments

Comment by Panashe Fundira (panashe-fundira) on Becoming a Staff Engineer · 2022-05-22T23:29:04.069Z · LW · GW

I got into the idea of deliberately developmental organizations. Are DDOs a good idea? I still think probably yes, but they're easy to get wrong. What's important is that I spent a lot of time thinking about, and then experimenting with, ways to affect the culture of the organization, and thereby came to understand how organizations work.

What have you found in your experiments, in terms of what helps or hurts in developing DDO culture?

Comment by Panashe Fundira (panashe-fundira) on Televised sports exist to gamble with testosterone levels using prediction skill · 2021-12-03T03:13:54.522Z · LW · GW

If it's all about prediction, why do poor teams still have fans?

Comment by Panashe Fundira (panashe-fundira) on 87,000 Hours or: Thoughts on Home Ownership · 2021-09-30T08:59:30.993Z · LW · GW

2 years later, I'd still be interested in your model if you're willing to share it.

Comment by Panashe Fundira (panashe-fundira) on Book review: Knowledge and Decisions by Thomas Sowell · 2021-09-07T00:53:33.772Z · LW · GW

I can't shake the feeling that throughout the book Sowell tries to make a case for a more right-wing/free-market point of view without admitting it, albeit in the most eloquent manner.

Did you find any of his political claims to be dubious?

Comment by Panashe Fundira (panashe-fundira) on What technical-ish books do you recommend that are readable on Kindle? · 2021-01-16T19:24:08.796Z · LW · GW

FYI this link doesn't go anywhere

Here's a link to the book's Goodreads page

Comment by Panashe Fundira (panashe-fundira) on Betting with Mandatory Post-Mortem · 2020-07-02T13:09:43.612Z · LW · GW

I really like the idea of doing a pre-mortem here.

Comment by Panashe Fundira (panashe-fundira) on Betting with Mandatory Post-Mortem · 2020-06-28T22:35:35.794Z · LW · GW

Suppose you and I have two different models, and my model is less wrong than yours. Suppose my model assigns a 40% probability to event X and yours assigns 60%; we disagree and bet, and event X happens. If I had an oracle over the true distribution of X, my write-up would consist of saying "this falls within the 40% of cases my model predicted", which doesn't seem very useful. In the absence of an oracle, I would end up writing up praise for, and updating towards, your more wrong model, which is obviously not what we want.


This approach might lead to over-updating on single bets. You'd need to record your bets, and the odds on them, over time to see how well calibrated you were. If your calibration over time is poor, then you should update your model. Perhaps we can weaken the suggestion in the post to writing a post-mortem on why you may be wrong. Then, when you reflect over multiple bets, you could try to tease out common patterns and deficits in your model-building.
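To make that concrete, here's a minimal sketch (Python, with invented bet records, not anything from the post) of the calibration check I have in mind: group your stated probabilities into buckets and compare each bucket against the observed frequency of the predicted events.

```python
from collections import defaultdict

# Each record: (stated probability that the event happens, whether it actually happened)
bets = [
    (0.4, True), (0.6, False), (0.7, True), (0.8, True),
    (0.3, False), (0.6, True), (0.9, True), (0.2, False),
]

# Group bets by stated probability (rounded to the nearest 10%)
buckets = defaultdict(list)
for prob, happened in bets:
    buckets[round(prob, 1)].append(happened)

# A well-calibrated bettor's 60% predictions come true roughly 60% of the time
for prob in sorted(buckets):
    outcomes = buckets[prob]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {prob:.0%}: came true {observed:.0%} of the time ({len(outcomes)} bets)")
```

With enough recorded bets, a persistent gap between a bucket's stated probability and its observed frequency is the signal to update your model, rather than any single lost bet.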

Comment by Panashe Fundira (panashe-fundira) on Effective children education · 2020-06-07T14:15:48.860Z · LW · GW
Interesting about ultralearning, I will need to skim that in more detail at some point. Without spaced repetition/incremental reading, that looks like the best method of learning to me.

His book touches on spaced repetition (he's a big proponent of the testing effect) and other things. It's really about how to put together effective learning projects, from the research phase, through execution.

Regarding SuperMemo, yes, I use the software and incremental reading extensively (if you have an interest in learning it, I would happily teach you).

I am interested in IR, but I don't have a Windows machine (macOS/Linux) and don't think the overhead of maintaining a VM would be worth it. Do you IR everything you read online, or do you reserve it for materials in your field? I mostly take notes in Roam, and add particularly salient things that I think I'll want to remember to Anki.

I also subscribe heavily to Woz's ideas. I like them because they tend to be much closer to global maxima (e.g. free-running sleep), since societal/academic norms do not restrict his views.

Noted. The SuperMemo wiki has always seemed quite unwieldy to me, but I'll take a closer look at what he has to say on topics outside of spaced repetition.

Comment by Panashe Fundira (panashe-fundira) on Effective children education · 2020-06-05T21:10:16.342Z · LW · GW
1. you know what you don't know, so if you need some preceding information you can find it for yourself (in large part thanks to the internet)
2. teaching is centered around the idea that a teacher knows what you should know better than you do. In many cases, I don't think this makes much sense. If I want to learn how to make thing x, getting a general education in the field x falls into (field y) doesn't make sense. Learning a bunch of useless things in field y is a waste of my time. If I'm deciding what to learn by myself, I can make sure not only that I'm learning things efficiently but also that I'm choosing what to learn effectively.

This is the approach advocated by Scott Young in the book Ultralearning. You build out a learning project for the thing you actually want to learn, learn by doing, and fill in obvious gaps that are 'rate-limiting' to the learning 'reaction' as you go along. Learning by working directly on the end result you actually want also sidesteps the problem of transfer: students are typically unable to apply the abstract classroom skills they've been taught to real-world situations.


I see you link to SuperMemo and ask about it a lot. Do you use that software, and do you generally subscribe to Wozniak's ideas?

Comment by Panashe Fundira (panashe-fundira) on Are there technical/object-level fields that make sense to recruit to LessWrong? · 2020-05-01T03:10:44.271Z · LW · GW
I think that's a bit of a shame because I personally have found LW-style thinking useful for programming. My debugging process has especially benefited from applying some combination of informal probabilistic reasoning and "making beliefs pay rent", which enabled me to make more principled decisions about which hypotheses to falsify first when finding root causes.

As someone who landed on your comment specifically by searching for what LW has said about software engineering in particular, I'd love to read more about your methods, experiences, and thoughts on the subject. Have you written about this anywhere?

Comment by Panashe Fundira (panashe-fundira) on Training Regime Day 10: Systemization · 2020-04-30T17:44:36.912Z · LW · GW

Thanks for this; it's a good unifying summary of systemization that I found valuable even after reading the Systemization chapter in the CFAR Handbook.

Another thing that falls into the 'spend your money to conserve attention' category is hiring a personal assistant. A fellow CFAR alum convinced me to try it out, and it's definitely effective. I fell out of using my PA, but that is something I want to revisit, possibly when I have more money.

Automatically donate money.
This might be bad because Giving Tuesday exists.

Is this out of fear of missing out on matching donations? If so, you could just set up your recurring donations to coincide with Giving Tuesday. Like you say later in the article, find ways to make the decision to give only once (or as few times as possible).

Checklists

Neat, I had 'search for what LW has to say about checklists' on my to-do list, but I'd never made an explicit connection between them and systemization. I've added some checklists to my list of candidate systems, as well as a system for updating them when they fail (of course, you could incorporate techniques like Murphyjitsu when writing your checklists too!)

Always hang your backpack on the command hook by your door.

Great idea, ordering a command hook now.

A powerful form of systemization is a systematic way to generate systems. How to construct these is beyond the scope of this post, but I just want to take a moment to shill Getting Things Done as an amazing meta-system.

It's not obvious to me how GTD generates systems. Could you elaborate here?

Comment by Panashe Fundira (panashe-fundira) on Do you trust the research on handwriting vs. typing for notes? · 2020-04-29T15:54:52.974Z · LW · GW

Yes, I used Anki in college for a range of different courses. It made memorization-based courses (art history) an absolute breeze, and helped me build my conceptual tower for advanced math courses. Spaced repetition is quite useful for remembering things. I recommend reading this article by Michael Nielsen, alongside the comprehensive reference from Gwern.


I'm skeptical of the value of Readwise because it is so passive. I think part of the value of using SRS programs like Anki comes from formulating good questions and structuring your knowledge into atomic facts. You need to have at least some understanding of the material in order to be able to make good flashcards. Flashcards that are questions or cloze deletions have a built-in feedback mechanism: did I answer the question correctly, and if so, how difficult was recalling the answer? I don't think being shown things that I highlighted while reading is going to help me learn the material well. If you just want to be reminded of some concepts or a beautiful passage periodically, it should work well.

Comment by Panashe Fundira (panashe-fundira) on Popular papers to be scrutinized? · 2020-04-16T06:02:25.613Z · LW · GW

Murray has a new book out, Human Diversity, so that may be a good place to start.

Comment by Panashe Fundira (panashe-fundira) on An Equilibrium of No Free Energy · 2020-04-04T05:09:20.931Z · LW · GW

Thank you for writing such a clear article on the issue. It cleared up my confusion around the EMH, and especially how it differs from the random walk hypothesis. I'll definitely reference this article when people bring up the EMH.

Comment by Panashe Fundira (panashe-fundira) on Some Simple Observations Five Years After Starting Mindfulness Meditation · 2020-01-30T05:00:57.505Z · LW · GW
specifically focused on doing planks, an exercise that's far more intellectually challenging than physically challenging.

How are planks intellectually challenging? They certainly present a great physical challenge, so this is an interesting claim.

Comment by Panashe Fundira (panashe-fundira) on Meditation Trains Metacognition · 2020-01-25T08:54:49.283Z · LW · GW

Here's an updated link to the Sedlmeier meta-analysis

Comment by Panashe Fundira (panashe-fundira) on Stoicism: Cautionary Advice · 2020-01-25T07:26:54.052Z · LW · GW
If, however, you've developed more stoic thinking patterns and tell yourself, "I made a mistake, but that's already happened, so instead of regretting it I'm going to focus on what I can do to avoid that mistake in the future," you'll also likely have body language and speech that doesn't communicate regret in the same way. Sometimes people will recognize that you are still aware of your mistake but are approaching it from a different angle, especially if they already know you, but don't count on it.

This seems like it could be mitigated with clear communication. You may not appear outwardly sad to your teammates, but you can still hold your hand up and admit that you made a mistake.

Comment by Panashe Fundira (panashe-fundira) on Bayesian examination · 2019-12-11T21:24:59.036Z · LW · GW

As a student, did you experience any particular frustrations with this approach?

Comment by Panashe Fundira (panashe-fundira) on Is Rationalist Self-Improvement Real? · 2019-12-11T21:06:18.537Z · LW · GW
Retweet Trump with comment.

What is the error that you're implying here?

Comment by Panashe Fundira (panashe-fundira) on Gears-Level Models are Capital Investments · 2019-12-11T20:37:08.301Z · LW · GW
A simple example is debugging code: a gears-level approach is to try and understand what the code is doing and why it doesn't do what you want, a black-box approach is to try changing things somewhat randomly.

To drill down further, a great way to build a model of why a defect arises is to use the scientific method. You generate some hypothesis about the behavior of your program ("if X is true, then Y") and then test it. If the results of your test invalidate the hypothesis, you've learned something about your code and where not to look. If your hypothesis is confirmed, you may be able to resolve your issue, or at least refine your hypothesis in the right direction.
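As a toy illustration (the function and the hypothesis here are invented for the example, not from the post), one iteration of that loop might look like this:

```python
# Hypothesis: the report crashes because compute_average() is sometimes called
# with an empty list. Prediction: if that's true, compute_average([]) raises
# ZeroDivisionError.

def compute_average(xs):
    return sum(xs) / len(xs)

try:
    compute_average([])
except ZeroDivisionError:
    print("Hypothesis survives: empty input is a plausible root cause.")
else:
    print("Hypothesis falsified: the crash must come from somewhere else.")
```

Either outcome narrows the search: a confirmed prediction points toward a fix, and a falsified one rules out a whole branch of the code.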

Comment by Panashe Fundira (panashe-fundira) on Link: The Cook and the Chef: Musk's Secret Sauce - Wait But Why · 2019-12-11T05:06:51.727Z · LW · GW

There is some irony in the author's insistence that Musk is excellent because of his exceptional software, not his hardware. How could the author possibly know this, or be able to separate the effect of Musk's raw intellectual horsepower from that of his critical reasoning skills?

I did find this post quite inspirational, although I do wonder how the author came up with the Want box / Reality box / Strategy box model. It doesn't seem like Musk explicitly gave this model to the author.