What would you do to reduce the turnover rate of the AI factor?

I see that several of the good systems shared in relation to the AI factor have a very high turnover rate.

I do not want a turnover of more than 350% in the USA, Canada, or the EU because of the cost. To accommodate this, I have simply set this target: `(FutureRel%Chg_D(15, #Bench) + FutureRel%Chg_D(30, #Bench) / 2 + FutureRel%Chg_D(63, #Bench) / 4.2) / 3`
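
For intuition, here is a rough Python sketch of what that blended target computes, assuming `FutureRel%Chg_D(N, #Bench)` means the stock's forward N-day percent change minus the benchmark's (the exact P123 definition may differ) and that the daily close series are aligned on the same dates; the function names are just placeholders:

```python
import pandas as pd

def future_rel_pct_chg(prices: pd.Series, bench: pd.Series, days: int) -> pd.Series:
    # Forward percent change over `days` trading days, minus the benchmark's
    # (rough stand-in for FutureRel%Chg_D(days, #Bench); exact definition assumed).
    stock_fwd = (prices.shift(-days) / prices - 1.0) * 100.0
    bench_fwd = (bench.shift(-days) / bench - 1.0) * 100.0
    return stock_fwd - bench_fwd

def blended_target(prices: pd.Series, bench: pd.Series) -> pd.Series:
    # The weighting from the formula above: (r15 + r30/2 + r63/4.2) / 3.
    r15 = future_rel_pct_chg(prices, bench, 15)
    r30 = future_rel_pct_chg(prices, bench, 30)
    r63 = future_rel_pct_chg(prices, bench, 63)
    return (r15 + r30 / 2 + r63 / 4.2) / 3
```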

However, with this target I still get a very high turnover rate, which makes the results unrealistic when I test it in the simulator or a screen backtest, especially once all the costs of trading in the US or EU are factored in.

I have tried adjusting the target by extending the periods (doubling them) to reduce turnover, but that resulted in significantly worse outcomes.

Here are my EU results with Rank tolerance set to 7, so that I do not get a turnover higher than an average of 8%:

Here are my US results with the same adjustment:

It turned into a longer post than I intended, but does anyone have any good suggestions on how I can reduce the turnover rate without significantly affecting the returns?

**Or do you use a solution that incorporates costs with all trades, so that the AI factor penalizes models with too high turnover?**
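
Roughly, the idea would be something like this (just a sketch, not an existing P123 feature; the cost figure, helper, and function names are all assumptions): net an assumed round-trip trading cost off each horizon's forward relative return before blending, so short-horizon gains only count if they clear the cost hurdle.

```python
import pandas as pd

ROUND_TRIP_COST_PCT = 0.4  # assumed commission + slippage per round trip, in %

def fwd_rel_chg(prices: pd.Series, bench: pd.Series, days: int) -> pd.Series:
    # Forward % change over `days` trading days, minus the benchmark's.
    return ((prices.shift(-days) / prices) - (bench.shift(-days) / bench)) * 100.0

def cost_adjusted_target(prices: pd.Series, bench: pd.Series) -> pd.Series:
    # Subtract the cost hurdle from every horizon, then apply the same
    # blend as before; shorter horizons are hit relatively harder.
    r15 = fwd_rel_chg(prices, bench, 15) - ROUND_TRIP_COST_PCT
    r30 = fwd_rel_chg(prices, bench, 30) - ROUND_TRIP_COST_PCT
    r63 = fwd_rel_chg(prices, bench, 63) - ROUND_TRIP_COST_PCT
    return (r15 + r30 / 2 + r63 / 4.2) / 3
```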

Have you tried using an AI predictor in a simulation instead? I get high turnover with AI factors because stocks keep moving in and out of the universe; with an AI predictor you can force your holdings into the universe.

And a lookback period of 15 days sounds very, very short unless you intend to hold your stocks for only a few weeks.

Any reason why you divide this return by 4.2?

If your target is based on a 15/30-day window, your model will mostly depend on momentum and/or changes in sentiment factors and, to some extent, PYQ financial factors. That is why turnover is high and the validated return is probably far from realistic.

I would say that there is no free lunch ... a bunch of short-period momentum factors, PYQ vs. TTM ... a shorter target vs. a longer one most often gives a higher CAGR, but with higher turnover and a lower probability of the validated performance materialising.

Are the ranks more volatile for some AI models than for other models (including many P123 classic models) even if the returns are stable enough to use the model?

I wonder what a standard deviation of ranks in the rank performance test would show. If we had that, would we end up using factors in the ranking system that stabilize the volatility of ranks, with the goal of reducing turnover? Would you be able to reduce volatility while keeping the same returns, and get immediate feedback in the rank performance test?
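
If you wanted to measure that outside P123, a minimal sketch (assuming you can export a date-by-ticker matrix of ranks, e.g. via the API; the function name is made up) could look like this:

```python
import pandas as pd

def rank_volatility(ranks: pd.DataFrame) -> pd.Series:
    # `ranks`: one row per rebalance date, one column per ticker, values 0-100.
    # Standard deviation of the period-to-period rank changes, highest first --
    # a crude proxy for how "jumpy" each stock's rank is.
    rank_changes = ranks.diff()
    return rank_changes.std().sort_values(ascending=False)
```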

For now I just change the RankPos sell rule, using a larger drop in rank to trigger a sell. In my experience it requires a pretty large change to keep the same results and turnover for some models, and a change is needed even when I am using the same factors (with different models). So I think some models have more rank volatility than others.

With Python you could probably try exponential smoothing of the ranks for volatile models.
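
A minimal pandas version of that, assuming the same date-by-ticker rank matrix as above and a smoothing span you would tune yourself:

```python
import pandas as pd

def smooth_ranks(ranks: pd.DataFrame, span: int = 4) -> pd.DataFrame:
    # Exponentially smooth each stock's rank series to damp week-to-week noise,
    # then re-rank cross-sectionally so every date is back on a 0-100 scale.
    smoothed = ranks.ewm(span=span, adjust=False).mean()
    return smoothed.rank(axis=1, pct=True) * 100.0
```

The trade-off is lag: a larger span lowers turnover but makes the ranks react more slowly to new information.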