post by Eliezer Yudkowsky (Eliezer_Yudkowsky)
Wrong Tomorrow by Maciej Cegłowski is a very simple site for listing pundit predictions and tracking them [FAQ]. It doesn't come with prices and active betting... but a simple registry of this kind can scale much faster than a market, and right now we're in a situation where no one is bothering to track pundit predictions or report on pundit track records. Predictions are produced as simple entertainment or as simple political theater, without the slightest fear of accountability.
This site is missing some features, but it looks to me like a starting attempt at what's needed - a Wikipedia-like, user-contributed, low-barrier-to-entry database of all pundit predictions, past and present.
Comments sorted by top scores.
comment by Maciej_Ceglowski ·
2009-04-02T08:30:50.000Z · LW(p) · GW(p)
I run the site and would welcome feature ideas, either here or by email to firstname.lastname@example.org. I'm a long-time fan of this blog, so I'd love to hear from OB readers.
comment by Robin_Hanson2 ·
2009-04-02T15:39:28.000Z · LW(p) · GW(p)
The site seems to be promising to later evaluate a rather large number of widely ranging predictions. If it manages to actually keep this commitment, it will make an important contribution. The five year limit on prediction horizons is unfortunate, but of course site authors have every right to limit their effort commitment. I do suggest that they post the date that each prediction was submitted, along with the date it was made, to help observers correct for selection effects.
comment by Rasmus_Faber2 ·
2009-04-02T09:01:53.000Z · LW(p) · GW(p)
It is a very nice idea, but I do not like the fact that you can submit predictions that were made long ago - not to mention predictions that have already been proved or disproved.
It may encourage unnecessary bias in which predictions are submitted.
comment by Jason6 ·
2009-04-02T19:38:55.000Z · LW(p) · GW(p)
I'd like to see citations as well - indicating where the predictions were made and also supporting evidence for any adjudication.
comment by a_soulless_automaton ·
2009-04-02T19:27:22.000Z · LW(p) · GW(p)
I agree; it would be interesting to have more nuanced results than just right vs. wrong.
E.g., with Scoble, one prediction was arguably half-right (minus the stylus, the iPhone essentially qualifies), one was correct at a later date (as Joe said), one is marked correct already (RSS becoming mainstream), and one is simply wrong (re: FriendFeed).
I also agree that selection bias could skew results badly, but the idea overall is excellent.
Also, props for the disclaimer on the page! "Past performance is no guarantee of future results." Almost Hofstadterian levels of indirect self-reference.
comment by Joe7 ·
2009-04-02T17:48:42.000Z · LW(p) · GW(p)
Taking a look at the site listing for Robert Scoble's prediction: "Three things will join in 2005: cell phones. Hard drives. Skype." A couple of suggestions:
Since you can now put Skype on the iPhone (http://www.skype.com/download/skype/iphone/), his trifecta is complete; he was only wrong about the date. There should be some indication of why a prediction is wrong, which parts are wrong, and just how wrong it is.
Scoble commented on this set of predictions, at the time, as being "...a bit tongue-in-cheek" (see http://radio.weblogs.com/0001011/2004/12/23.html#a8987). So there ought to be some indication of how serious the prediction is. Otherwise, someone could post something like Conan O'Brien's year 2000 predictions and there would be no indication that they were meant as a joke.
At the very least I'd like to see some kind of annotation.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) ·
2009-04-02T16:08:38.000Z · LW(p) · GW(p)
The main things missing, I think, are the features that would let all this be done by community practice rather than by the site authors: Wikipedia practices like NPOV and voting subject to oversight by admins, something like a wiki-editable paragraph for each prediction entered, and voting on the most interesting predictions (so that the site can have a most interesting prediction of the day up front). If I were going into this full throttle, those are the preliminary features I'd be looking for.
It actually seems like something that ought to be part of Wikipedia, but unfortunately Wikipedia seems to have become the de facto standard while maintaining a very closed practice that limits how much information it can accumulate and whether it can have special interfaces for particularly regular sorts of facts.
comment by nazgulnarsil3 ·
2009-04-02T10:01:10.000Z · LW(p) · GW(p)
The likely result is that pundits would start taking more care to make their predictions untestable.
This is already the norm:
1) make a qualitative prediction
2) reject criticism with the "no true Scotsman" fallacy (x wasn't really an example of y because z)
comment by Rasmus_Faber2 ·
2009-04-02T09:51:26.000Z · LW(p) · GW(p)
"It" refers to the fact that you can submit old predictions.
There is no doubt that a site like this will have some bias in which predictions are submitted - mostly, I expect, the more extreme ones.
But the fact that you can submit predictions, when you have considerably more knowledge (or even complete certainty) about the outcome than the predictor had when he made the prediction, seems to be a completely unnecessary source of bias.
comment by g ·
2009-04-02T09:30:11.000Z · LW(p) · GW(p)
I'm not sure whether "it" in Rasmus's second paragraph is referring specifically to the fact that you can submit old predictions, or to the idea of the site as a whole; but the possibility -- nay, the certainty -- of considerable selection bias makes this (to me) not at all like a database of all pundit predictions, but more another form of entertainment.
Don't misunderstand me; I think it's an excellent form of entertainment, and entertainment with an important serious side. But even if someone is represented by a dozen predictions on Wrong Tomorrow, all of them (correctly) marked WRONG, that could just mean that it's only the wackiest 1% of their predictions that have been submitted. Which would show that they're far from infallible, but that's hardly news.
Quite possibly this is the best one can do without a large paid staff (which introduces troubles aplenty of its own); it's just not feasible to track every single testable prediction made by any pundit, and if that started being done and noticed the likely result is that pundits would start taking more care to make their predictions untestable.