comment by gjm · 2015-07-02T13:57:21.787Z
Seems more or less like it. But I'd be really careful.
First of all, one minor correction. According to their graphs, "very complex" strategies have alleged performance even better than "simple" ones. I would be skeptical, though; the more complex a strategy, the greater the opportunities for its apparent performance to be the result of (perhaps accidental) overfitting.
Now, some reasons for being cautious about inferring from their summary data that simple daily earnings-based strategies are best:
Do they define what they mean by "performance" (average annualized returns, or some metric Quantpedia has made up, or what)? If the former, note that what you probably actually care about is some combination of risk and return (if you have a strategy with given expected returns, you can always increase them a lot by taking on huge leverage -- but at the cost of greatly increased risk). If the latter, you'll want to look at whether their metric matches what you care about.
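To make the leverage point concrete, here's a toy sketch (the 8% mean and 15% volatility are made-up numbers, and borrowing costs are ignored): levering up multiplies both the mean return and the volatility, so the annualized return climbs while a risk-adjusted measure like the Sharpe ratio doesn't move at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily returns for some strategy:
# roughly 8% annual mean, 15% annual volatility, 20 years of data.
daily = rng.normal(0.08 / 252, 0.15 / np.sqrt(252), size=252 * 20)

def ann_return(r):
    # Annualized arithmetic mean return.
    return r.mean() * 252

def sharpe(r):
    # Annualized Sharpe ratio (risk-free rate taken as zero for simplicity).
    return r.mean() / r.std() * np.sqrt(252)

for leverage in (1, 2, 5):
    lev = daily * leverage  # ignoring financing costs of the leverage
    print(f"{leverage}x leverage: return {ann_return(lev):+.1%}, Sharpe {sharpe(lev):.2f}")
```

The printed returns scale linearly with leverage while the Sharpe ratio stays constant, which is why a raw-return "performance" number tells you very little on its own.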
I guess these are reported results of published strategies. This means there's a huge selection effect: no one is going to publish their ingenious new strategy that ... loses 5% per year. That would be OK if selection were on the basis of actual future results, but of course selection is actually on the basis of (something like) results on past data, or simulated guesses at what future data might look like. So now imagine two classes of strategy, one of which performs very consistently and one of which has immensely variable results. 100 people try out each class of strategy. The first class returns between 9% and 11% per year, every time. The second gives measured returns varying wildly between -90% and +90% per year. Then you throw out the failures and look at the averages. The second class of strategy is going to look a lot better -- even if actually it's just much higher-risk and will in practice most likely bankrupt you quickly.
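This toy scenario is easy to simulate directly. The numbers below (100 trials per class, uniform draws over the ranges from the paragraph, and "published" meaning "backtest came out positive") are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # trials per class of strategy

# Class 1: very consistent, measured return always between 9% and 11%.
consistent = rng.uniform(0.09, 0.11, n)
# Class 2: immensely variable, measured return between -90% and +90%.
variable = rng.uniform(-0.90, 0.90, n)

def published_mean(results, threshold=0.0):
    # Selection effect: only results above the threshold get published.
    survivors = results[results > threshold]
    return survivors.mean()

print(f"consistent, all trials:     {consistent.mean():+.1%}")
print(f"variable,   all trials:     {variable.mean():+.1%}")
print(f"consistent, published only: {published_mean(consistent):+.1%}")
print(f"variable,   published only: {published_mean(variable):+.1%}")
```

Unfiltered, the variable class averages around zero; after the losers are thrown out, its "published" average jumps to roughly 45%, comfortably beating the consistent class's honest ~10% -- purely an artifact of the selection.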
It could be that for some reason selection effects are different for different kinds of strategies. Consider, e.g., the following (entirely imaginary) sequence of events.
- In 1980, someone famous publishes a paper claiming to describe an earnings-trading strategy that returns 25%.
- Everyone reads this and is very impressed. In the following years, no one will publish an earnings-trading strategy that doesn't perform at least about that well in their tests.
- (Even though when they find one it may usually actually be the result of luck or defective testing.)
- For other kinds of strategy, though, less impressive results are still publishable.
- 20 years later, it's discovered that that 1980 paper was completely bogus.
- But now the earnings-trading strategies in the literature are all impressive, because of selection effects.
- So even with the old paper thrown out, a literature survey will give the impression that earnings trading is better than everything else.
Generally, the information they make public is so limited that I would be really scared to let any real money ride on inferences from it.
One final remark: If "daily" means that typical holding times are about one day, or that one typically makes trading decisions once a day, and if "trading earnings" means making decisions based on companies' quarterly earnings announcements ... it seems like "daily" and "trading earnings" are hard to reconcile with one another. But probably I'm misunderstanding their terminology, because "daily" is the only timescale category with anything at 70%, and both "trading earnings" and "earnings announcement" (shouldn't those be the same thing?) have strategies at 70%. Or perhaps the categories aren't exhaustive and some strategies don't have a timescale classification at all?