AIF test results consistency

My understanding is that if you run the same tree-based AI model multiple times you may observe some variation in the results (correct?). Does anyone have an idea of how large this variation can be? Today I re-ran two models that last ran in early February. I simply copied the originals and re-ran them; the dataset start and end dates were identical. The first model, built from only the pre-defined cash flow features, showed a decline in the excess return for the Entire Period from 1.55% to -0.41%. The second model, which used only the estimate and revisions features, declined from 1.84% to 0.85%. Both models used time-series CV and Extra Trees 2 on S&P 1500 data.

If you right-click on the model there is an 'Add Duplicate' option that lets you create copies of the same model. Create a few duplicates and run them to get an idea of the randomness; run 5 or 6 more if training time is not too costly. Of course, this assumes you have not set random_state in the hyperparameters to remove the randomness. If you have, remove it first. A sketch of the effect follows below.
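To see why duplicates diverge, here is a minimal scikit-learn sketch. It assumes the platform's extra-trees model behaves like `ExtraTreesRegressor`; the synthetic data, feature count, and hyperparameter values are purely illustrative, not Portfolio123's actual setup. The loop mimics re-running duplicates without a fixed seed; the last two fits show that pinning random_state makes runs reproducible.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))             # stand-in for factor features
y = 0.5 * X[:, 0] + rng.normal(size=2000)   # noisy illustrative target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Same model, same data, different seeds -> different test scores,
# mirroring what you see when duplicates of the same AIF model diverge.
for seed in range(5):
    model = ExtraTreesRegressor(n_estimators=100, random_state=seed)
    model.fit(X_train, y_train)
    print(f"seed={seed}  R^2={model.score(X_test, y_test):.4f}")

# Fixing random_state removes the randomness: two runs with the same
# seed produce identical predictions.
a = ExtraTreesRegressor(n_estimators=100, random_state=42).fit(X_train, y_train)
b = ExtraTreesRegressor(n_estimators=100, random_state=42).fit(X_train, y_train)
assert np.allclose(a.predict(X_test), b.predict(X_test))
```

The spread of the printed R^2 values across seeds is the same kind of run-to-run variation you measured between the February and current backtests, which is why running several unseeded duplicates gives you a rough distribution of it.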
