The Feedback Problem
post by Elo · 2018-07-29T23:54:13.059Z · LW · GW · 4 comments
My Blog Bearlamp: First, Previous
Lesswrong [LW · GW]: First [LW · GW], Previous [LW · GW] -- Greaterwrong [LW · GW]: First [LW · GW], Previous [LW · GW]
“Let me practice my free throws from different distances so that I can throw well wherever I am in the game.” Anders Ericsson (the “10,000 hours” researcher), in Peak, talks about deliberate practice. He also talks about the difficulty of getting feedback.
Feedback is the hardest part of learning anything. With good feedback you can go from Chopsticks to Beethoven in simple steps. Technically speaking, the “hard part” of a skill is not the part that requires you to physically press the keys on a piano, or the part that requires you to work out how to move the piece in Tetris to where you want it to go. It is not the part of chess that is moving a piece to its next square, or the part of poker that is reading the cards and knowing which ones you hold.
Most games have a basic level of skill that isn’t that hard to reach. Anyone can play tennis provided they can hold a racket and swing it. Okay, maybe you need eyeballs and the ability to move around a court too, but the barrier isn’t much higher than that. Some skills require more: riding a unicycle actually takes balance, which might take longer to learn, and some games are complicated in the same way.
Bad feedback is also useful. As the books How to Measure Anything and Superforecasting argue, and as everyone in the quantified-self movement knows, even a shoddy piece of feedback has a Value of Information. One of my favourite poor pieces of feedback came when I added to my Self Form, “Did I stick to my diet today? yes/no/maybe”. Like magic, for a month I stuck to my diet and lost 2kg. One good, clean feedback measure and I made leaps. (Sure enough, other problems eventually got in my way, but it was a good start.)
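The Value of Information idea from How to Measure Anything can be made concrete with a toy calculation. A minimal sketch, with all numbers being illustrative assumptions rather than anything from the post: even a self-report log that is only right 80% of the time beats acting on a 50/50 prior.

```python
# Toy Value of Information calculation (illustrative numbers, not from the post).
# Decision: act as if the diet is working, or not. A noisy daily yes/no log
# is still worth something if it beats blind guessing.

p_works = 0.5            # assumed prior belief that the diet is working
signal_accuracy = 0.8    # assumed: the self-report log is right 80% of the time

# Without any feedback, the best you can do is act on the prior.
value_without_info = max(p_works, 1 - p_works)

# With the (imperfect) signal, you act on what it says, so you are
# right exactly as often as the signal is.
value_with_info = signal_accuracy

voi = value_with_info - value_without_info
print(f"Value of imperfect information: {voi:.2f}")
# -> Value of imperfect information: 0.30
```

The point of the sketch is only that the difference is positive: any feedback channel more reliable than a coin flip carries some value, which is why even a crude yes/no/maybe checkbox can move behaviour.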
The feedback problem asks, “How would I know if I am improving?”. For a musician, that might mean recording yourself playing and then listening back to what you sound like. For a farmer, that might mean weighing or counting the crop and comparing it to last year’s. For a scientist, that might mean repeated tests for reliability, and for someone with an emotional-trauma history it might look like “I don’t feel terrible”.
Next: Emotional training model
4 comments
comment by Apollo13 · 2018-10-03T04:23:15.295Z · LW(p) · GW(p)
I actually find that, when I give a performance I consider mediocre, feedback that says, "That was good," is much worse than feedback that says, "This specific thing was bad," and it's even better if it says, "This specific thing was bad, and here's a way to fix or compensate for it." Specific critique is more helpful to me than general critique, no matter its positive or negative implications. Things like "Do this more," "Stop doing this," and "Do this instead of that" are, to me, the most helpful kinds of feedback. (The question of which comments to act on, and how, is my problem to deal with, though.)
comment by avturchin · 2018-07-30T11:53:10.960Z · LW(p) · GW(p)
That is why a bad review from a reviewer at a peer-reviewed journal is better than negative or zero karma on a LW post. The reviewer is obliged to find all errors. A reader of a post just ignores it if they don't find it interesting enough to engage with. If the reader does comment, they will not search for all errors, but will just pick a point that interests them. A zero-karma post provides very little knowledge about how to improve the next post, beyond: "something is wrong, try differently next time".
In some forums, this is partly solved by "actionable downvotes", where every downvote must be explained by choosing from a preset list of types (like: dangerous, wrong, etc.), as on Longecity, or even as plain text.
For this reason, ideas like "open reviewing" do not work very well, and we still need traditional scientific journals.
comment by Dacyn · 2018-07-30T18:32:13.201Z · LW(p) · GW(p)
Reviewer is obliged to find all errors.
Not true. A reviewer's main job is to give a high-level assessment of the quality of a paper. If the assessment is negative, they usually do not look for every specific error in the paper. A detailed list of errors is more common when the reviewer recommends that the journal accept the paper (since then the author(s) can edit it before publication), but even then many reviewers do not provide one (which is why it is common to find peer-reviewed papers with errors in them).
At least, this is the case in math.