
Thoughts!
It seems like using a classifier that only targets returns over 15% would let the decision tree focus on the boundary around the outperformers rather than on minimizing errors across the entire universe of stocks. It also seems like XGBoost would work better if it only focused on the top few percent of the universe. But my "seems like" observations are more often wrong than right.
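
Here is a minimal sketch of what I mean, not the paper's code: turn the forward return into a binary ">= 15%" label and fit a plain decision tree on it. The data, column names, and tree parameters are all made up for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical factor exposures and forward returns for 500 stocks.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 20)),
                 columns=[f"factor_{i}" for i in range(20)])
fwd_ret = pd.Series(rng.normal(0.05, 0.2, size=500))

# Binary label: 1 if the stock returned >= 15%, else 0. The tree now only
# has to learn the boundary around the outperformers, not the full
# cross-section of returns.
y = (fwd_ret >= 0.15).astype(int)

clf = DecisionTreeClassifier(max_depth=4, class_weight="balanced")
clf.fit(X, y)
```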

This paper has a lot of thought-provoking material.

First, as mentioned, they use classification rather than regression: on each rebalance they pick the 15 assets with the highest predicted probability of a return >= 0.15.
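
The selection step would then look something like this (continuing the sketch above, with `clf` and `X` assumed from that snippet):

```python
# Probability of the ">= 15% return" class for every stock, then take the
# 15 names the classifier considers most likely to outperform.
proba_outperform = pd.Series(clf.predict_proba(X)[:, 1], index=X.index)
top_15 = proba_outperform.nlargest(15).index
```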

Second, XGBoost improved performance, but with significantly more volatility.
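
Swapping in XGBoost keeps the same framing; only the model changes. Parameters below are placeholders, not the paper's settings:

```python
from xgboost import XGBClassifier

# Same binary target as before, boosted trees instead of a single tree.
xgb_clf = XGBClassifier(n_estimators=200, max_depth=4,
                        learning_rate=0.05, eval_metric="logloss")
xgb_clf.fit(X, y)
proba_xgb = pd.Series(xgb_clf.predict_proba(X)[:, 1], index=X.index)
```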

Third, Figures 8, 9, and 10 show how individual feature importance varies over time. Their sample period, or rebalance period as they call it, is four months long. They state that 44 classifiers are trained (one on each rebalancing date), and the importance of each of the 20 features is graphed over time.
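
A rough sketch of that rolling setup: one classifier per rebalance date, with the feature importances collected so they can be plotted over time. The dates and data here are placeholders, not the paper's sample.

```python
# 44 rebalance dates, four months apart (illustrative dates only).
rebalance_dates = pd.date_range("2007-01-01", periods=44, freq="4MS")
importance_history = {}

for date in rebalance_dates:
    # In the real study, X_t and y_t would be the cross-section of factor
    # data and forward-return labels available as of `date`.
    X_t, y_t = X, y  # placeholder data for illustration
    model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.05)
    model.fit(X_t, y_t)
    importance_history[date] = pd.Series(model.feature_importances_,
                                         index=X_t.columns)

# Rows are rebalance dates, columns are the 20 features; this is the kind
# of table you would plot to see importance drifting over time.
importance_over_time = pd.DataFrame(importance_history).T
```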

What stands out is that the momentum factors are almost mirror images of the financial fundamentals (ROE, price/book, earnings/price, price/sales, ...). I've always been aware that the market environment changes over time, but this shows more volatility in factor effectiveness than I would have expected.
