I have noticed that models built with the AI Factor system tend to have extremely high turnover. It seems like there needs to be some way to include a slippage factor in the model; especially for small-cap models, the amount of turnover is not realistic.
Are there other ways to handle this issue that I am unaware of?
Thanks,
Daniel
Turnover is something I pay close attention to when working with AI models. What I’ve learned is not to focus too much on the model or algorithm with the highest return. Instead, place much more emphasis on turnover—especially when working with micro-cap models.
Avoid setting a target horizon that’s too short, like 1MRel; try 3M instead. Even though the top bucket may show worse results with a 3M target than with 1M, with the shorter horizon slippage will kill the model in backtesting.
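To put rough numbers on why the horizon matters, annual slippage drag is roughly annual turnover times round-trip slippage. The figures below are purely illustrative assumptions (not measured from any model), just to show how fast a high-churn micro-cap book bleeds:

```python
# Back-of-the-envelope slippage drag: annual cost ~ annual turnover x round-trip slippage.
# All numbers below are illustrative assumptions, not measured values.

def annual_slippage_drag(monthly_turnover, round_trip_slippage):
    """Approximate yearly return lost to slippage.

    monthly_turnover: fraction of the book replaced each month (0.8 = 80%)
    round_trip_slippage: cost to exit one position and enter another (0.01 = 1%)
    """
    return 12 * monthly_turnover * round_trip_slippage

# A 1M-style target that churns 80% of the book monthly at 1% round-trip cost:
short_horizon = annual_slippage_drag(0.80, 0.01)  # 9.6% per year
# A 3M-style target that churns 30% monthly at the same cost:
long_horizon = annual_slippage_drag(0.30, 0.01)   # 3.6% per year
print(f"{short_horizon:.1%} vs {long_horizon:.1%}")
```

A 6-point annual drag is more than enough to erase the top bucket's apparent edge over a longer-horizon model.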
Remove all features with short windows.
Remove all features with a high percentage of missing values (NAs).
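The NA screen is a one-liner in pandas. Everything here is a toy example: the column names and the 20% threshold are assumptions you'd tune for your own data, not anything built into the platform:

```python
import numpy as np
import pandas as pd

# Illustrative feature matrix; column names and the 20% cutoff are assumptions.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(100, 3)),
                 columns=["mom_3m", "mom_6m", "earnings_surprise"])
# Knock out 40% of one column to simulate a sparsely reported factor.
X.loc[X.sample(frac=0.4, random_state=0).index, "earnings_surprise"] = np.nan

max_na_frac = 0.20                      # drop features missing in >20% of rows
na_frac = X.isna().mean()               # per-column fraction of missing values
keep = na_frac[na_frac <= max_na_frac].index
X_clean = X[keep]
print(list(X_clean.columns))            # ['mom_3m', 'mom_6m']
```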
Eliminate highly correlated features. For example, if you load the Core: Momentum factors, you may get both Close(0)/Close(160) and Close(0)/Close(180), which typically just adds unnecessary turnover to your model because of the high correlation.
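A common way to automate this is a pairwise-correlation filter: compute the absolute correlation matrix, look at the upper triangle, and drop any column that correlates above a cutoff with an earlier one. A minimal sketch, with made-up data standing in for the two near-duplicate momentum ratios (the 0.95 threshold is an assumption):

```python
import numpy as np
import pandas as pd

# Toy data: ret_160d and ret_180d mimic Close(0)/Close(160) vs Close(0)/Close(180),
# i.e. the same signal plus a little noise; "value" is an unrelated factor.
rng = np.random.default_rng(1)
base = rng.normal(size=500).cumsum()
feats = pd.DataFrame({
    "ret_160d": base + rng.normal(scale=0.05, size=500),
    "ret_180d": base + rng.normal(scale=0.05, size=500),
    "value":    rng.normal(size=500),
})

threshold = 0.95  # assumed cutoff; tune to taste
corr = feats.corr().abs()
# Keep only the upper triangle so each pair is checked once.
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c] > threshold).any()]
kept = feats.drop(columns=to_drop)
print(to_drop)  # ['ret_180d']
```

Of the near-duplicate pair, only the later column is dropped, so one copy of the signal always survives.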
When choosing hyperparameters, prioritize those that reduce turnover rather than those that boost bucket results. For instance, in LightGBM, it's tempting to use a low min_child_samples and a high num_leaves, but the increased turnover often outweighs the improvement in bucket performance.
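One practical way to compare hyperparameter settings on turnover rather than on bucket returns is to score the same universe at two consecutive rebalances and measure how much of the top bucket gets replaced. This helper is hypothetical (not a platform function), just a sketch of the measurement:

```python
import numpy as np

def top_bucket_turnover(scores_t0, scores_t1, frac=0.10):
    """Fraction of the top bucket replaced between two scoring dates.

    scores_t0, scores_t1: model scores for the same universe, same order,
    at consecutive rebalances. frac is the bucket size (top 10% by default).
    Illustrative helper, not part of any library.
    """
    n = len(scores_t0)
    k = max(1, int(n * frac))
    top0 = set(np.argsort(scores_t0)[-k:])  # indices of the top-k names at t0
    top1 = set(np.argsort(scores_t1)[-k:])  # indices of the top-k names at t1
    return 1.0 - len(top0 & top1) / k

# Sanity checks: identical scores -> zero turnover; reversed scores -> full turnover.
s = np.arange(100, dtype=float)
print(top_bucket_turnover(s, s))   # 0.0
print(top_bucket_turnover(s, -s))  # 1.0
```

Running this across rebalance dates for two candidate settings (say, num_leaves=15 / min_child_samples=200 versus num_leaves=255 / min_child_samples=5) makes the turnover cost of the aggressive config explicit before you ever look at bucket returns.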
AlgoMan, thanks for the response; it is much appreciated. I do contend that being able to model slippage explicitly would have significant value as well.