Do AI model factor weights vary through time?

Do AI model factor weights vary through time? Is it the case for some models and not others? If they do vary, it would be helpful to visualize the time series of the weights.

A predictor establishes weights for each feature when it is trained, and those weights do not change afterward; only retraining changes them.

N.B. some algorithms also have a random component, so the weights can change just by retraining on the exact same period.
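A minimal numpy sketch of that random component, using synthetic data and a toy bagged OLS (bootstrap resampling is the source of randomness; the data, helper name, and coefficients are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.1, size=200)

def fit_bagged_coefs(X, y, seed, n_models=10):
    """Average OLS coefficients over bootstrap resamples (a random component)."""
    rng = np.random.default_rng(seed)
    coefs = []
    for _ in range(n_models):
        idx = rng.integers(0, len(y), size=len(y))  # bootstrap sample
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        coefs.append(beta)
    return np.mean(coefs, axis=0)

# Same data, same period -- only the seed differs, yet the weights differ:
w1 = fit_bagged_coefs(X, y, seed=1)
w2 = fit_bagged_coefs(X, y, seed=2)
print(np.max(np.abs(w1 - w2)))  # small but nonzero
```

The same effect shows up in real models (e.g. random forests or neural nets retrained with a different random seed), just with a more complex randomness source.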

I understand that we will receive information about the feature weights in the trained ML model sometime in the future. Do you know approximately when that will be?

That information would significantly improve our understanding of what the ML model does, and would also inform the decision to use ML instead of the typical ranking system; making investment decisions in a black box, as things stand now, is not a good solution.


This is not as straightforward as with ranking systems. It falls under feature engineering, and it can mean many different things: it could be a simple regression between features and targets, it could be model-native scores (calculated during fitting), or it could be an impact study that can be done with any model.

It can also get messy if, for example, you use the feature-importance scores from a tree model to select your features, but then end up using an NN model.

Below is a (partial) list of the many techniques. We are still trying to figure out what we should do and how to present it in a simple, cohesive manner.

Let us know if anything is incorrect or missing. Thanks.

Identifying important features

No model required

Correlation with the label (several different correlation measures)
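Two common correlation measures, sketched with numpy on synthetic data (the data and helper names are invented for illustration; in practice `scipy.stats.pearsonr`/`spearmanr` do the same job):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
# Feature 0: linear signal; feature 1: monotone nonlinear; feature 2: noise
y = 3.0 * X[:, 0] + X[:, 1] ** 3 + rng.normal(scale=0.5, size=500)

def pearson(x, y):
    """Linear correlation between a feature and the label."""
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    """Rank correlation: Pearson correlation of the ranks."""
    rank = lambda v: np.argsort(np.argsort(v))
    return pearson(rank(x), rank(y))

for j in range(X.shape[1]):
    print(f"feature {j}: pearson={pearson(X[:, j], y):+.2f}  "
          f"spearman={spearman(X[:, j], y):+.2f}")
```

No model is fit here: the scores depend only on the features and the label, which is why this bucket is "no model required".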

Model Native

Regression coefficients
Tree-based importance
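A sketch of the regression-coefficient flavor of model-native importance, using numpy on synthetic data (all names and coefficients here are illustrative; for trees, the analogous native scores would come from something like scikit-learn's `feature_importances_`):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 3))
y = 1.5 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=300)

# Standardize so coefficient magnitudes are comparable across features
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
Xb = np.column_stack([np.ones(len(y)), Xs])  # add intercept column

# The coefficients are computed during fitting -- that's what makes
# this "model native": no extra study is needed afterward.
beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
importance = np.abs(beta[1:])  # drop intercept; |coefficient| = importance
print(importance)  # feature 0 has the largest magnitude
```

Note the caveat from above: these scores are specific to the fitted model family, so they may not transfer to a different model type.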

Any Model

Permutation Importance (Feature Impact?)
Feature Ablation
SHAP
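Permutation importance is the simplest of the model-agnostic techniques to sketch. A minimal numpy version on synthetic data (the data, the OLS "model", and the helper name are all assumptions for illustration; the model could be anything with a predict function, which is the point of this bucket):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.2, size=400)

# "Model" = OLS here, but any fitted predict function works
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X: X @ beta

def permutation_importance(predict, X, y, seed=0):
    """Increase in MSE when each feature column is shuffled in turn."""
    rng = np.random.default_rng(seed)
    base_mse = np.mean((y - predict(X)) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break feature-label link
        scores.append(np.mean((y - predict(Xp)) ** 2) - base_mse)
    return np.array(scores)

imp = permutation_importance(predict, X, y)
print(imp)  # feature 0 largest; feature 2 near zero
```

Feature ablation is similar in spirit (retrain without the feature instead of shuffling it), and SHAP attributes each individual prediction to the features rather than producing one global score.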