Preview of our v1.1 of AI Factor: intelligent features, grid search, categorical & macro

This is especially important for those of us who happen to already know what features have been working for the last few years. I.e., all of us.

If feature selection can be incorporated into this, one can, in theory, begin to get a true idea of how a method of feature selection might perform out-of-sample.

That does not really work now. If we have already selected factors that we know have worked over the last 10 years, I am not convinced it matters which ML model (or P123 classic optimization method) we use afterward.
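A minimal sketch of the concern above, using sklearn (the synthetic data, `SelectKBest`, and the ridge model are my own illustration, not anything P123 does). If the factors are selected on the full history first, every cross-validation fold has already "seen" its test data through the selection step; putting the selection inside the pipeline, so it is refit on each fold's training data, removes that leak:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))  # 500 candidate factors, pure noise
y = rng.normal(size=200)         # "returns" unrelated to any factor

# LEAKY: pick the top factors on the full sample, then cross-validate.
# The selector has already peeked at every fold's test data.
leaky_cols = SelectKBest(f_regression, k=10).fit(X, y).get_support()
leaky = cross_val_score(Ridge(), X[:, leaky_cols], y,
                        cv=KFold(5, shuffle=True, random_state=0),
                        scoring="r2").mean()

# HONEST: selection lives inside the pipeline, refit per fold.
honest = cross_val_score(
    Pipeline([("select", SelectKBest(f_regression, k=10)),
              ("model", Ridge())]),
    X, y, cv=KFold(5, shuffle=True, random_state=0),
    scoring="r2").mean()

print(f"leaky CV R^2:  {leaky:.3f}")   # optimistically biased upward
print(f"honest CV R^2: {honest:.3f}")  # near or below zero on pure noise
```

On pure noise the leaky estimate comes out higher than the honest one, which is exactly why fixed, pre-selected factors make backtest results look better than they are.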

Yuval made this point in a post about one of my Python models that used fixed features. He was right. I cannot speak to exactly what he was thinking, but my results were certainly suspect. I agree with him on that, and I did not mind him pointing out the obvious.

Yuval makes this point here. I hope he does not mind me quoting it, since I think it is a great point and the context is the same:

I think Yuval was saying that, the way I did it, my results were not really out-of-sample in the least, and that I was wrong to say they were. Yuval is absolutely correct about that, I believe.

JLittleton has a fix for that once k-fold cross-validation becomes available: P123 would just have to add a walk-forward test set, and we would have a solution to what Yuval, too, is suggesting is a weakness in what we are doing now.
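A sketch of the shape of that fix, under my own assumptions about how it would look (sklearn's `TimeSeriesSplit` and the date range are illustrative only, not P123's planned implementation): k-fold-style validation happens inside a development window, and the most recent period is held out as a walk-forward test set that the feature/model selection never touches.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

dates = np.arange("2005-01", "2025-01", dtype="datetime64[M]")  # 240 months

# Reserve the most recent 48 months as the walk-forward test set.
holdout = 48
develop, test = dates[:-holdout], dates[-holdout:]

# Inside the development window, TimeSeriesSplit gives expanding
# train/validation folds where training always precedes validation in time.
for fold, (tr, va) in enumerate(TimeSeriesSplit(n_splits=4).split(develop)):
    print(f"fold {fold}: train {develop[tr][0]}..{develop[tr][-1]}, "
          f"validate {develop[va][0]}..{develop[va][-1]}")

print(f"final walk-forward test: {test[0]}..{test[-1]}")
```

All tuning and feature selection would be confined to the development folds; the walk-forward window is scored once, at the end, giving a genuinely out-of-sample result.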