Seminal paper cementing AI to the top spot for developing quantitative strategies

@Marco,

That is a lot of methods. Impressive and exciting.

I would be interested in what @pitmaster and others have to say.

But I am absolutely sure I am not the only one who will want to consider “model averaging”. At the extreme, that means running all of those models and averaging, or even stacking, the results (probably sorting predicted returns to make a rank).

“Stacking” would be a little hard, but model averaging would be trivial. We could probably do it without P123’s help by making each model a node and having the ranking system average the ranks. It could probably be done without being an explicit feature, although it could, potentially, be marketed as one.
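To make the rank-averaging idea concrete, here is a minimal sketch in pandas. The model names and predicted returns are hypothetical stand-ins, not anything P123 actually produces; each column plays the role of one "node," and averaging the per-model ranks gives the combined rank:

```python
import pandas as pd

# Hypothetical predicted returns from three models for five stocks.
preds = pd.DataFrame({
    "regression":    [0.05, 0.02, -0.01, 0.03,  0.00],
    "random_forest": [0.04, 0.01,  0.00, 0.05, -0.02],
    "xgboost":       [0.06, 0.00, -0.02, 0.04,  0.01],
}, index=["A", "B", "C", "D", "E"])

# Rank each model's predictions (1 = lowest predicted return),
# then average the ranks across models -- simple model averaging on ranks.
ranks = preds.rank(axis=0)
avg_rank = ranks.mean(axis=1)
print(avg_rank.sort_values(ascending=False))
```

Ranking before averaging (rather than averaging raw predicted returns) keeps one model with an extreme output scale from dominating the blend.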

But with or without P123 needing to address it, looking at model averaging (or even stacking) would be an obvious thing to do with such a rich supply of models.

Addendum: So I think a TL;DR is good enough here, and I'll leave it to members to go to ChatGPT if they want more information about stacking. Probably for Jupyter Notebooks at home to start.

TL;DR: When stacking, the predictions of individual models (e.g., regression, random forest, XGBoost, etc.) are used as features for the final model.
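For anyone who wants to try this in a Jupyter Notebook at home, here is a hedged sketch using scikit-learn's `StackingRegressor`, which handles the out-of-fold base-model predictions for you. The dataset here is synthetic (`make_regression`), just a stand-in for a real factor/return panel, and the choice of base and final estimators is purely illustrative:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a factor/return dataset.
X, y = make_regression(n_samples=500, n_features=10, noise=0.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base models whose cross-validated predictions become the features
# fed to the final (meta) model -- i.e., stacking.
stack = StackingRegressor(
    estimators=[
        ("ols", LinearRegression()),
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
    ],
    final_estimator=Ridge(),
)
stack.fit(X_train, y_train)
score = stack.score(X_test, y_test)
print(f"Held-out R^2: {score:.3f}")
```

The key detail is that `StackingRegressor` trains the final estimator on cross-validated predictions of the base models, which is what keeps the meta-model from simply memorizing in-sample fits.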

Jim aka “The grocer” because I stack and bag (Bootstrap Aggregate) so much.