Comments

Comment by DaveEtCircenses on D&D.Sci September 2022: The Allocation Helm · 2022-09-26T17:41:15.207Z · LW · GW

A solution by the method of "Thrash with linear regression, then get bored". I also make the (completely unsubstantiated) claim that an even split of students across houses will lead to better results.

Humblescrumble gets A, B, E, R and T.

Dragonslayer gets D, G, H, K and N.

Thought-Talon gets C, F, L*, M and Q*.

Serpentyne gets I*, J*, O, P and S*.

(Students marked * get a slightly better linear score in another House, but I balance the sizes)
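
A minimal sketch of what "thrash with linear regression, then balance the sizes" could look like; the file names, column names, and the greedy capping step are illustrative assumptions rather than the actual data format or the exact procedure used.

```python
# Sketch of "linear regression per house, then balance sizes".
# All file and column names here are hypothetical, not from the actual puzzle data.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: one row per past student, with stat columns,
# the house they were sorted into, and some outcome score to maximise.
history = pd.read_csv("past_students.csv")
students = pd.read_csv("this_years_students.csv")   # rows indexed by student A..T
stat_cols = [c for c in history.columns if c not in ("house", "outcome")]

# Fit one linear model per house on the students historically sorted there.
models = {}
for house, grp in history.groupby("house"):
    models[house] = LinearRegression().fit(grp[stat_cols], grp["outcome"])

# Predicted score for every (student, house) pair.
scores = pd.DataFrame(
    {house: m.predict(students[stat_cols]) for house, m in models.items()},
    index=students.index,
)

# Greedy assignment with an even split: take the largest remaining
# student-house score, skipping houses that already have 5 students.
capacity = {house: 5 for house in models}
assignment = {}
for (student, house), _ in scores.stack().sort_values(ascending=False).items():
    if student in assignment or capacity[house] == 0:
        continue
    assignment[student] = house
    capacity[house] -= 1
```

In this sketch, the per-house cap of 5 is what produces the starred compromises: a student gets bumped to their next-best house once their top-scoring house fills up.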

Comment by DaveEtCircenses on Turbocharging · 2022-08-03T16:52:48.482Z · LW · GW

I think the Law of Equal but Opposite Advice is extremely relevant here, in that there are two common failure modes for practicing.

The first of these is "not practicing what you actually do", and turbocharging helps with that.

The second of these is "practicing what you actually do, but inefficiently", and deliberate practice helps with that.

Of course, trying too hard to avoid the first failure mode yields the second (e.g. playing a whole piano piece through repeatedly), and trying too hard to avoid the second failure mode yields the first (e.g. memorising Anki flashcards for a language, but being unable to speak it since you didn't practice talking).

Comment by DaveEtCircenses on What's the big deal about Bayes' Theorem? · 2021-01-26T20:43:40.940Z · LW · GW

For your first half-question, "A Technical Explanation of Technical Explanation" [Edit: added link] sums up the big deal; Bayes' Theorem is part of what actually underpins how to make a map reflect the territory (given infinite compute): it is a necessary component. In comparison with other necessary components required to do this (i.e. logic, math, other basic probability) I would conjecture that Bayes is only special in that it is 'often' the last piece of the puzzle that is assembled in someone's mind, and thus takes on psychological significance.
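
For concreteness, the theorem itself, with $H$ a hypothesis and $E$ the evidence:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

where $P(H)$ is the prior, $P(E \mid H)$ the likelihood, and $P(H \mid E)$ the posterior.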


For the second half-question, I think using Bayes in life is more about understanding just how much priors matter than about actually crunching the numbers.

As an example, commenters on LessWrong will sometimes refer to 'Outside View' vs 'Inside View'. 

A quick example to summarise roughly what these mean: if predicting how long 'Project X' will take, the Outside View goes 'well, project A took 3 weeks, and project B took 4, and project C took 3, and this project is similar, so 3-4 weeks', whereas the Inside View goes 'well, to do project X, I need to do subprojects Y1, Y2, Y3 and Y4, and these should take me 3, 4, 5 and 4 days respectively, so 16 days = 2 and a bit weeks'. The Inside View is susceptible to the planning fallacy, etc.

General perspective: Outside View Good, Inside View Can Go Very Wrong.


You've probably guessed the punchline; this is really about Bayes' Theorem. The Outside View goes "I'm going to use previous projects as my prior (technically this forms a prior distribution of estimated project lengths), and then just go with that and try to avoid updating very much, because I have a second prior that says 'all the details of similar projects didn't matter in the past, so I shouldn't pay too much attention to them now'", whereas the Inside View Going Very Wrong is what happens when you throw out your priors; you can end up badly calibrated very quickly.
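
As a toy illustration of that framing, using the numbers from the project example above (the normal-normal update and the specific variances are my own illustrative assumptions, not something from the original comment):

```python
# Toy illustration: treat the Outside View as a normal prior over project length
# (in weeks) and the Inside View estimate as a single noisy observation of it.
# The numbers and the normal-normal model are illustrative assumptions only.

prior_mean = (3 + 4 + 3) / 3     # past projects took 3, 4 and 3 weeks
prior_var = 1.0                  # assumed spread among "similar projects"

inside_estimate = 16 / 7         # subprojects sum to 16 days, about 2.3 weeks
inside_var = 4.0                 # inside-view estimates are noisy (planning fallacy)

# Standard normal-normal update: a precision-weighted average of prior and observation.
posterior_precision = 1 / prior_var + 1 / inside_var
posterior_mean = (prior_mean / prior_var + inside_estimate / inside_var) / posterior_precision

print(f"Outside View prior:  {prior_mean:.1f} weeks")
print(f"Inside View alone:   {inside_estimate:.1f} weeks")
print(f"Posterior estimate:  {posterior_mean:.1f} weeks")  # stays close to the prior
```

With a wide variance on the inside-view observation, the posterior barely moves from the outside-view prior; discarding the prior and trusting the 2.3-week figure outright is the Going Very Wrong case.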