All,
TL;DR: The optimizer, especially with a few additional features, might be better than support vector machines and might gain wider support among the present members. It could also ease any transition to other methods members may want to try later (e.g., XGBoost) by introducing some optional methods that are similar, making XGBoost's early stopping more understandable, for example.
AND it allows one to use the result in a standard ranking system in a live port, like we have all been doing for years!
So the optimizer is an awesome tool and many have used it to find good models. There is a method out there that uses a spreadsheet to randomize the weights of factors and loads the result back into the optimizer. I will not go into the full algorithm, for brevity and because I am not sure everyone is using it the same way.
Suffice it to say this is excellent, has served many people well, and is actually a well-accepted method in the machine learning literature. I believe this is close to (or the same as) gradient descent.
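To make the comparison concrete, the spreadsheet approach can be sketched as random-perturbation hill climbing on the factor weights, with early stopping thrown in as a bonus. Everything below is illustrative: `backtest_score` is a hypothetical stand-in for whatever the optimizer maximizes (e.g., the backtested return of the ranking system), not a P123 API.

```python
import random

# Hypothetical stand-in for the optimizer's objective; NOT a P123 function.
def backtest_score(weights):
    # Toy objective: negative squared distance to a made-up "ideal" vector.
    target = [0.4, 0.3, 0.2, 0.1]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def hill_climb(weights, steps=500, step_size=0.05, patience=100, seed=0):
    """Randomly perturb the weights, keep only improvements, and stop early
    when `patience` consecutive perturbations fail to improve the score."""
    rng = random.Random(seed)
    best, best_score, stall = list(weights), backtest_score(weights), 0
    for _ in range(steps):
        candidate = [max(0.0, w + rng.uniform(-step_size, step_size))
                     for w in best]
        total = sum(candidate) or 1.0
        candidate = [w / total for w in candidate]  # renormalize to sum to 1
        score = backtest_score(candidate)
        if score > best_score:
            best, best_score, stall = candidate, score, 0
        else:
            stall += 1
            if stall >= patience:  # early stopping
                break
    return best, best_score

best_weights, best = hill_climb([0.25, 0.25, 0.25, 0.25])
```

Keeping only improving moves is what makes this resemble a (derivative-free) descent on the objective; the spreadsheet version presumably does the same thing by hand.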
I would suggest that P123 discuss this with their machine learning expert. Ask whether this jrinne, who even uses explore/exploit algorithms for picking restaurants and is probably autistic or something, is reasonable in calling this gradient descent.
If your expert says: "Yeah, I kind of get it", then you should ask them: "Are there some basic ML tools that could supplement this gradient descent thing and make it a full, advanced machine learning tool?"
Also ask: "In that regard, how hard would it be to make these standard ML methods optional for the optimizer:

- Early stopping
- K-fold cross-validation
- Recursive feature elimination
- Bootstrapping or subsampling. Subsampling would be less resource-intensive and mimics what is being done with mod() now, so it is already widely accepted."
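For a concrete picture of what one of these, k-fold cross-validation, might look like wrapped around an optimizer run, here is a minimal sketch. `score_weights` is a hypothetical placeholder for evaluating a set of factor weights over a subset of backtest periods; none of these names are real P123 functions.

```python
# Hypothetical placeholder for scoring factor weights over some backtest
# periods; it is not a real P123 or library function.
def score_weights(weights, periods):
    # Toy stand-in: a fixed per-period return, just to make the sketch run.
    return 0.01 * len(periods) / max(len(periods), 1)

def k_fold_splits(n_periods, k):
    """Yield (train, test) index lists, splitting the periods into k folds."""
    fold = n_periods // k
    for i in range(k):
        start = i * fold
        stop = (i + 1) * fold if i < k - 1 else n_periods
        test = list(range(start, stop))
        train = [p for p in range(n_periods) if p < start or p >= stop]
        yield train, test

def cross_validate(weights, n_periods=20, k=5):
    scores = []
    for train, test in k_fold_splits(n_periods, k):
        # In a real run you would re-optimize `weights` on `train` first,
        # then score the result only on the held-out `test` fold.
        scores.append(score_weights(weights, test))
    return sum(scores) / len(scores)

cv_score = cross_validate([0.25, 0.25, 0.25, 0.25])
```

Subsampling would slot in the same way: score each candidate weight vector on a random subset of periods rather than on every fold, which is cheaper and close in spirit to the mod() trick people already use.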
BTW, I prefer the more current term: "Spectrum Disorder." Joking, but what is it with me seeing math in everything? Not that it isn't a useful tool for annoying everyone around me.
I think it would work and could be marketed to new machine learners. I think it would also help retain some existing members who like this general method, if it improved their out-of-sample results. I doubt it would be more resource-intensive than the other ML methods discussed, but I am least certain about this point.
Jim