Is there an optimal function for belief calibration over time?
post by InquilineKea · 2011-04-27T08:46:16.731Z · LW · GW · Legacy · 6 comments
You can, say, measure a belief's error on a scale from -1 to 1, where 0 is a correct belief.
Then you could try calibrating the belief over time. The question is what the correction should look like: you could monotonically decrease the error, or monotonically decrease its absolute value, or even let the calibration function oscillate between -1 and 1. Sometimes a temporarily more incorrect belief might even be desirable, since it may give you additional information about the landscape. Writing D(t) for a belief's error at time t, this is where two trajectories can cross: a case where D1(t) > D2(t) but D1(t+1) < D2(t+1).
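As a toy illustration of the crossing-trajectories idea (this sketch is not from the post; the functions `d1` and `d2` are hypothetical error trajectories chosen for illustration): one belief's error shrinks monotonically while another oscillates in sign but shrinks faster in magnitude, so the oscillating one can start out worse and yet end up better calibrated.

```python
def d1(t, start=0.8, rate=0.9):
    """Monotone decay: the error shrinks toward 0 without changing sign."""
    return start * rate ** t

def d2(t, start=0.9, rate=0.5):
    """Oscillating decay: the error flips sign each step but halves in magnitude."""
    return start * (-rate) ** t

if __name__ == "__main__":
    # At t = 0 the oscillating belief is further from correct (0),
    # but by t = 1 its absolute error is already smaller.
    for t in range(4):
        print(t, round(d1(t), 3), round(d2(t), 3))
```

This is only one way to make the comparison concrete; the post's scale treats 0 as the correct belief, so "better calibrated" here means smaller absolute error.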
Comments sorted by top scores.
comment by Oscar_Cunningham · 2011-04-27T09:04:35.154Z · LW(p) · GW(p)
Could you explain in more detail please? I have no idea what you're talking about. For instance, how do you measure a belief on a -1 to 1 scale?
Replies from: Emile, wedrifid
comment by NancyLebovitz · 2011-04-27T11:44:19.710Z · LW(p) · GW(p)
I'm guessing that there might be an optimal function for each individual, at least for a while.
Assuming that what you mean is under- vs. over-confidence, some people will habitually be on one side of the scale, and others will be on the other.
Tracking whether one is habitually over- or under-confident (which might differ across various sorts of questions) could lead to better calibration.