InterpretML's Explainable Boosting Machine (EBM) is essentially a high-performance Generalized Additive Model (GAM) that uses gradient boosting to learn a non-linear function for each feature.
For P123, each feature would get its own additive term, and each feature's contribution can be fully visualized. InterpretML is an open-source Python library.
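To make the GAM-plus-boosting idea concrete, here is a toy sketch in plain numpy (this is NOT InterpretML's actual code, just an illustration of the concept): each feature gets a piecewise-constant shape function, fit by cyclically boosting on the residuals, so the final prediction is an intercept plus a sum of per-feature curves.

```python
import numpy as np

# Toy EBM-style additive model: cyclically boost a piecewise-constant
# shape function f_j for each feature, so pred = intercept + sum_j f_j(x_j).
rng = np.random.default_rng(0)
n, n_bins, n_rounds, lr = 500, 8, 50, 0.2
X = rng.uniform(-1, 1, size=(n, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, n)

# Quantile bin edges and per-sample bin index for each feature
edges = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1]) for j in range(2)]
bins = [np.digitize(X[:, j], edges[j]) for j in range(2)]
shape = [np.zeros(n_bins) for _ in range(2)]   # learned f_j, one value per bin
intercept = y.mean()
pred = np.full(n, intercept)

for _ in range(n_rounds):
    for j in range(2):                         # one small update per feature per round
        resid = y - pred
        step = np.array([resid[bins[j] == b].mean() if np.any(bins[j] == b) else 0.0
                         for b in range(n_bins)])
        shape[j] += lr * step
        pred = intercept + sum(s[b] for s, b in zip(shape, bins))

print(round(float(np.mean((y - pred) ** 2)), 3))  # MSE, near the 0.01 noise floor
```

Because the model is just a sum of one-dimensional curves, you can plot each `shape[j]` directly, which is exactly the kind of per-feature visualization InterpretML provides.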
Could someone turn this into an app to supplement feature importances (and the upcoming SHAP in AI 2.0) when P123 opens up apps?
As per Gemini:
EBMs are generally "better" for interpretability because they force the model to be additive (and thus visually clear) upfront, whereas SHAP tries to reverse-engineer additivity out of a complex spaghetti-code model.
But the most interesting part is that you can add and remove features on the fly without retraining. Because the features are additive, you can just 'turn off' a signal you don't trust and instantly see the new ranking. You would probably still want to run it on the validation set.
It is faster than permutation importance or anything else I can think of for feature selection.
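The "turn off a feature without retraining" step can be sketched like this: store each stock's per-feature contributions as a matrix, then drop a column and re-score. The contribution values here are made up for illustration; in InterpretML they would come from the fitted model.

```python
import numpy as np

# In an additive model, score = intercept + sum of per-feature contributions,
# so removing a distrusted signal is just dropping a column -- no retraining.
rng = np.random.default_rng(1)
n_stocks, n_features = 10, 3
intercept = 0.5
contrib = rng.normal(0, 1, size=(n_stocks, n_features))  # f_j(x_ij) per stock/feature

full_score = intercept + contrib.sum(axis=1)
full_rank = np.argsort(-full_score)            # best stock first

drop = 2                                       # the signal we want to switch off
reduced_score = intercept + np.delete(contrib, drop, axis=1).sum(axis=1)
reduced_rank = np.argsort(-reduced_score)      # new ranking, instantly

print(full_rank.tolist())
print(reduced_rank.tolist())
```

Re-scoring is a single column sum, so re-ranking a whole universe after toggling a feature is effectively free compared to retraining.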
The key is that before turning off a feature (which does not require retraining), you can inspect its learned shape function, which reads much like a rank performance test, and then decide whether to drop it.
It really is no longer a black box. What else would you want for feature selection?
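That inspection step can be sketched as follows. In InterpretML you would read the per-feature curve from the global explanation; here the learned contribution is faked with a U-shaped function, and we summarize it as the average contribution per feature-rank quintile, similar in spirit to a rank performance test.

```python
import numpy as np

# Summarize a feature's learned shape function by rank quintile.
rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 1000)                               # feature value per stock
f_x = 0.8 * (x - 0.5) ** 2 + rng.normal(0, 0.02, 1000)    # stand-in for learned f(x)

# Quintile bucket (0..4) of each stock's feature rank
quintile = np.minimum((np.argsort(np.argsort(x)) / len(x) * 5).astype(int), 4)
avg = [float(f_x[quintile == q].mean()) for q in range(5)]
for q, a in enumerate(avg):
    print(f"quintile {q + 1}: mean contribution {a:+.3f}")
```

A table (or plot) like this makes it obvious whether a feature's contribution is monotonic, U-shaped, or just noise before you decide to switch it off.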
Non-linear. You can turn interactions on or off. Not very compute-intensive. Maybe ideal for everyone at P123, but certainly for new users or anyone who wants to really understand what a feature is doing.
Anyway, I will check it out with downloads and may need to get the API so that I can test features. Maybe I can code this as a P123 app.
I was also thinking: maybe an idea is "community-funded apps," where someone promises to build one if a certain amount is pledged. P123 holds the money in escrow and pays if the app is delivered, keeping an administration fee.