Use AI predictor in ranking system

It works now, so please disregard this post.

If this ranking system is being used in a backtest, it will not work tomorrow. Prediction will only be allowed for at most 1 year of history. There will be a much more efficient way to do backtests with predictions.

More details tomorrow

When will we be able to backtest again?


You can do it now. You have to re-run the validation; the icon then becomes enabled, which: 1) copies the formula to use in backtesting, and 2) brings up a popup with more info and quick links to run screen backtests. You do not need to use the entire period of the dataset, of course.

Notice that we created a new formula that only accesses validation predictions. There are also several other enhancements which will be announced soon.
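
As a rough illustration (the function name follows the AIFactorValidation formula mentioned later in this thread, but the exact arguments are placeholders, not necessarily what the copy icon produces), the copied validation-prediction formula can then be used like any other factor in a screen backtest rule, for example:

AIFactorValidation("MyAIFactor", "MyModel") > 0

where "MyAIFactor" and "MyModel" stand in for your own AI Factor and validated model, and the rule keeps stocks with a positive predicted value.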


That's exciting :slight_smile:
Would have been handy to have a re-run button next to the validated model.

There is one more column in the Results, "Turn%". What does that mean?

Is it possible to somehow "Force Positions into Universe" in a strategy with the validated prediction?

What determines the window available for backtesting? I noticed the following behavior on the first couple of tests I tried:

  • First model with a validation period of 2004-06-08 -> 2024-06-08, basic holdout with a gap of 52 weeks, and a holdout period of 2 years. The backtest period available was 6/11/2022 - 6/8/2024, which is kind of what I was expecting.
  • Second model with a validation period of 2004-06-08 -> 2024-06-08, k-fold validation with 7 folds, and a 2.9-year rolling holdout period. Here the backtest period was 6/12/2004 - 6/8/2024. I wasn't expecting this one.

Is the available backtest period the amalgamation of all the holdout periods?

Correct. It strings together all the orange bits ("Validation - Holdouts") in the Validation -> Method chart.
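
To put numbers on the two cases above: a basic 2-year holdout produces a single holdout segment (roughly 6/2022 - 6/2024), so that is the whole backtest window. With 7 folds and ~2.9-year rolling holdouts, the segments add up to roughly 7 x 2.9 ≈ 20 years, so strung together they cover essentially the full 6/2004 - 6/2024 validation period.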

Agree

Annualized Turnover
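
(Assuming the standard definition: 100% annualized turnover means the portfolio's holdings are replaced roughly once per year, so the 200-300% figures discussed below mean turning the book over two to three times a year.)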

No

Not being able to force positions into the universe when trading a micro-cap universe is a huge disadvantage. The results I get from AIFactorValidation compared to the Predictor Rankings are very different. Many of my good buys get sold when they gain value because they get thrown out of the universe.

For example, in a strategy backtest with Predictor Rankings, I get 64 stocks with a realized gain of more than 100%, compared to only 34 with the AIFactorValidation test.

I would really appreciate it if you could re-open the possibility of using the Predictor in the ranking system for periods longer than one year.


What is the difference in their CAGR?

It goes from just above 200% turnover to almost 300% turnover. CAGR drops by 20% or so... and the sell rules just do not apply any more, since the best holdings get thrown out of the universe before the sell rules would sell them.

I guess the use case for AIFactorValidation in backtesting strategies is only to apply buy rules and market filters. Other than that, you could just as well use the screener.

Can I ask what the original CAGR is?

about 77%

Maybe you could tweak the universe a bit to make this work? E.g., if your normal mktcap restriction is <500, you could add something like OR FHist("MktCap",12)<500 to the universe rules.

I know it wouldn't be perfect, but probably worth testing.
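
A rough sketch of the idea, assuming a 500 (million) market-cap limit as in the example above; the exact numbers are placeholders:

MktCap < 500 OR FHist("MktCap",12) < 500

Since FHist("MktCap",12) looks at the market cap 12 weeks ago, this keeps a holding in the universe for up to about 12 weeks after it crosses the limit, instead of it being thrown out immediately.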

I agree with this. I understand there is compute cost associated with predicting on live data, but at least 5 years back would be helpful. One year is not informative, and personally it would not give me confidence if I can only test a model for one year.

It's not necessary. Just make a copy of your AI Factor, validate a model using "Basic Holdout" with saving of predictions enabled. It's exactly the same thing as using a Predictor.

Allowing predictions in a backtest will cause us problems. Someone will backtest a ranking system with 10 AI factors for example, bring the prediction server to its knees, and affect other users trying to do normal predictions.

Once we are happy with a backtest / AI factor, how are we using it in a strategy?

Hi @marco

How can we test for targets that go beyond 13 weeks? For instance, the AI factor supports testing for things like "12MTotRet." How do we create an AI model that predicts that kind of forecast horizon and then test it in a screener or simulation, both long and short?

Thank you!