List of Probability Calibration Exercises
post by Isaac King (KingSupernova) · 2022-01-23T02:12:41.798Z · LW · GW · 12 comments
I recently spent a while looking at how different people had designed their probability calibration exercises (for ideas on how to design my own), and they turned out to be quite difficult to find. Many of the best ones were the least advertised and hardest to locate online. I figured I'd compile them all here in case anyone else finds themselves in a similar position. Please let me know about any I missed and I'll add them to the post. Many of these are old and no longer maintained, so no guarantees as to quality.
http://acritch.com/credence-game/
http://confidence.success-equation.com/
https://calibration-practice.neocities.org/
http://web.archive.org/web/20100529074053/http://www.acceleratingfuture.com/tom/?p=129
http://credencecalibration.com/
https://programs.clearerthinking.org/calibrate_your_judgment.html
https://www.openphilanthropy.org/calibration or https://80000hours.org/calibration-training/ (Different URLs for the same application.)
https://calibration.lazdini.lv/
http://web.archive.org/web/20161020032514/http://calibratedprobabilityassessment.org/
https://predictionbook.com/credence_games/try
https://calibration-training.netlify.app/
https://play.google.com/store/apps/details?id=com.the_calibration_game
https://www.metaculus.com/tutorials/
https://outsidetheasylum.blog/probability-calibration/
https://peterattiamd.com/confidence/
http://quantifiedintuitions.org/calibration
12 comments
comment by Adam Zerner (adamzerner) · 2022-01-23T05:26:56.306Z · LW(p) · GW(p)
Funny timing. I'm actually in the process of working on https://calibration-training.netlify.app/ and am planning to post some sort of initial alpha release to LessWrong soon! I need to seed the database with more questions first though. Right now there are only 10, but I have a script and approach that should make it easy enough to get tens of thousands before long. This is helpful though. I'll look through the existing resources and see if there's anything I can use to improve my app.
comment by Jozdien · 2022-01-23T13:57:19.054Z · LW(p) · GW(p)
I made an Android app based on http://acritch.com/credence-game/, which you can find here.
And funny timing for me too: I just hosted a web version of the Aumann Agreement Game at https://aumann.io/ last week (most likely more riddled with bugs than a dumpster mattress) and was holding off testing it until I had some free time to post about it.
↑ comment by Isaac King (KingSupernova) · 2022-01-23T21:09:37.151Z · LW(p) · GW(p)
This looks super neat, thank you for sharing. I just did a quick test and can confirm that it is in fact riddled with bugs. If it would help, I can write up a list of what needs fixing.
↑ comment by Jozdien · 2022-01-24T06:26:08.310Z · LW(p) · GW(p)
That would be helpful if you have the time, thanks!
↑ comment by Isaac King (KingSupernova) · 2022-01-24T21:29:10.339Z · LW(p) · GW(p)
Well, the biggest problem is that it doesn't seem to work. I tested it in a 2-player game where we both locked in an answer, but the game didn't progress to the next round. I waited for the timer to run out, but it still didn't progress to the next round; it just stayed at 0:00. Changes in my probability are also not visible to the other players until I lock mine in.
A few more minor issues:
- After locking in a probability, there's no indication in the UI that I've done so. I can even press the "lock" button again and get the same popup, despite the fact that it's already locked. It would be better to have the lock button disappear or grey out, and/or have some other clear visual indicator that it's locked.
- If two people join with the same username, the game seems to think that they're the same person. They both show up as "you" on the player list, although they are given different answers.
- "Wendy's" shows up as "Wendy's", and same problem for all other words containing apostrophes. (Probably because you're setting element.innerText rather than element.innerHTML, or something like that.)
- If I try to join an invalid room, nothing happens, which is confusing. It would be better to have some sort of error message displayed.
- It's possible to join a room with a blank username by accident. Fixing that is a pain due to:
- If in the waiting room someone quits and rejoins, they'll show up as a third player. It doesn't seem possible to remove a player from the game, you have to cancel and create a whole new game.
- Pressing the "back" button in the browser has some very unintuitive results. I'd expect it to take me back to the homepage, but it seems to either leave me in the same game or put me back into a previous game.
- And not a bug, but it would be nice to be able to customize the timer length.
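For the apostrophe issue, here's a minimal sketch of what I suspect is going on and one way to fix it. It assumes the stored question text is HTML-entity-encoded; the `decodeHtmlEntities` helper and the element id are hypothetical, not taken from the actual aumann.io code.

```typescript
// Assumption: the question text is stored with HTML entities ("Wendy&#39;s").
// element.innerText never decodes HTML, so the entity is shown literally.
// Decoding the entities first keeps innerText's XSS safety while still
// displaying the apostrophe correctly.
function decodeHtmlEntities(encoded: string): string {
  const scratch = document.createElement("textarea");
  scratch.innerHTML = encoded;   // the browser decodes the entities here
  return scratch.value;          // "Wendy&#39;s" -> "Wendy's"
}

const questionEl = document.getElementById("question"); // hypothetical element id
if (questionEl) {
  questionEl.innerText = decodeHtmlEntities("Wendy&#39;s");
}
```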
comment by moridinamael · 2022-01-25T00:33:22.404Z · LW(p) · GW(p)
This is fantastic. We used Critch's calibration game and the Metaculus calibration trainer for our Practical Decision-Theory course, but it's always good to have a very wide variety of exercises and questions.
comment by Jan Christian Refsgaard (jan-christian-refsgaard) · 2022-01-24T17:21:31.375Z · LW(p) · GW(p)
It would be nice if you wrote a short paragraph for each link ("requires download", "questions are from 2011", etc.), or sorted the list somehow :)
comment by Austin Chen (austin-chen) · 2022-01-24T09:34:49.206Z · LW(p) · GW(p)
Metaculus has a calibration tutorial too: https://www.metaculus.com/tutorials/
I've been thinking about adding a calibration exercise to https://manifold.markets as well, so I'm curious: what makes one particular set of calibration exercises more valuable than another? Better UI? Interesting questions? Legible or shareable results?
↑ comment by Isaac King (KingSupernova) · 2022-01-24T21:11:14.492Z · LW(p) · GW(p)
Questions about a topic that I don't know about result in me just putting the max entropy distribution on that question, which is fine if it's rare, but leads to unhelpful results if they make up a large proportion of all the questions. Most calibration tests I found pulled from generic trivia categories such as sports, politics, celebrities, science, and geography. I didn't find many that were domain-specific, so that might be a good area to focus on.
Some of them don't tell me what the right answers are at the end, or even which questions I got wrong, which I found unsatisfying. If there's a question that I marked as 95% and got wrong, I'd like to know what it was so that I can look into that topic further.
It's easiest to get people to answer small numbers of questions (<50), but that leads to a lot of noise in the results. A perfectly calibrated human answering 25 questions at 70% confidence could easily get 80% or 60% of them right and show up as miscalibrated. Incorporating statistical techniques to prevent that would be good. (For example, calculate the standard deviation for that number of questions at that confidence level, and only tell the user that they're over/under confident if they fall outside it.) The fifth one in my list above does something neat where they say "Your chance of being well calibrated, relative to the null hypothesis, is X percent". I'm not sure how that's calculated though.
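A minimal sketch of that standard-deviation check (the `calibrationVerdict` helper and the 2-SD threshold are assumptions of mine, not taken from any of the tools above):

```typescript
// Treat the n answers given at confidence p as Bernoulli trials, compute the
// binomial standard deviation, and only report over/under-confidence when the
// observed hit count is more than ~2 SDs away from the expected count.
function calibrationVerdict(p: number, n: number, hits: number): string {
  const expected = p * n;                       // expected number correct
  const sd = Math.sqrt(n * p * (1 - p));        // binomial standard deviation
  const z = (hits - expected) / sd;             // how many SDs off the result is
  if (z > 2) return "possibly underconfident";  // got more right than claimed
  if (z < -2) return "possibly overconfident";  // got fewer right than claimed
  return "consistent with good calibration";    // within noise for this sample size
}

// 25 questions at 70% confidence: expected 17.5 correct, SD ≈ 2.3, so anywhere
// from roughly 13 to 22 correct shouldn't be flagged as miscalibration.
console.log(calibrationVerdict(0.7, 25, 16)); // consistent with good calibration
console.log(calibrationVerdict(0.7, 25, 10)); // possibly overconfident
```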
comment by chanamessinger (cmessinger) · 2022-10-24T11:52:25.719Z · LW(p) · GW(p)
Nice! Added these to the wiki on calibration: https://www.lesswrong.com/tag/calibration