PREVIEW: Screenshots of upcoming AI Factors

I have always used excess returns relative to the universe, myself.

Also, I have tried classifiers multiple times, and they seem to work, but never as well as regressors for me.

Jim

Thoughts!
Seems like using a classifier for only returns over 15% would allow the decision tree to focus on the boundary around the outperformers rather than focusing on optimizing the errors across the entire universe of stocks. It seems like XGBoost would also work better if it were only focusing on the top few percent of the universe. But my "seems like" observations are more often wrong than right.
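A rough sketch of what I mean, not anything from the paper or P123; the file, column, and feature names are placeholders:

```python
import pandas as pd
from xgboost import XGBClassifier, XGBRegressor

df = pd.read_csv("factor_panel.csv")               # hypothetical factor/return download
features = ["roe", "pr2book", "ep", "mom6m"]       # placeholder feature names

y_reg = df["fwd_return"]                           # regression target: the raw forward return
y_cls = (df["fwd_return"] >= 0.15).astype(int)     # classification target: only the outperformers

reg = XGBRegressor(n_estimators=300, max_depth=4).fit(df[features], y_reg)
cls = XGBClassifier(n_estimators=300, max_depth=4).fit(df[features], y_cls)

# The classifier's loss is driven entirely by the >= 15% boundary, and at
# prediction time you can just take the names most likely to clear it
df["p_outperform"] = cls.predict_proba(df[features])[:, 1]
picks = df.nlargest(15, "p_outperform")["ticker"]
```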

This paper has a lot of thought-provoking material.

First, as mentioned, they use classification rather than regression, picking the 15 assets with the highest probability of a >= 0.15 return.

Second, XGBoost improved performance, but with significantly more volatility.

Third, Figures 8, 9, and 10 show the variation of individual feature importance over time. Their sample period, or rebalance as they call it, is in this case 4 months long. They state that 44 classifiers are trained (one on each rebalancing date), and the feature importance of each of the 20 features is graphed over time; a rough sketch of how that kind of plot could be reproduced is at the end of this post.

What stands out is that the momentum factors are almost mirror images of the financial fundamentals (ROE, price/book, earnings/price, price/sales, etc.). I've always been aware that the market environment changes over time, but this shows more volatility in factor effectiveness than I would have expected.
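For what it's worth, here is roughly how that third point could be reproduced. Everything here (the panel layout, the column and feature names, the list of rebalance dates) is a placeholder of my own, not the paper's code:

```python
import pandas as pd
from xgboost import XGBClassifier

panel = pd.read_csv("factor_panel.csv")            # hypothetical: one row per (date, stock)
features = ["roe", "pr2book", "ep", "pr2sales", "mom6m", "mom12m"]  # placeholders
rebalance_dates = sorted(panel["date"].unique())   # in the paper, 44 dates 4 months apart

rows = []
for date in rebalance_dates:
    train = panel[panel["date"] < date]            # only data known before the rebalance
    if train.empty:
        continue
    model = XGBClassifier(n_estimators=300, max_depth=4)
    model.fit(train[features], (train["fwd_return"] >= 0.15).astype(int))
    rows.append(pd.Series(model.feature_importances_, index=features, name=date))

importance_over_time = pd.DataFrame(rows)          # rows = rebalance dates, columns = features
importance_over_time.plot()                        # momentum vs. fundamentals through time
```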


For the nth time. Use alpha!

If you're choosing stocks to buy calls or puts on, then total return makes some sense. They expire worthless if you don't get a certain total return.

Options are priced based on volatility. There is no delta that is valued for purchase or sale in that security. No need to take my word for it; just google it or study an options pricing model.
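For reference, and just my addition here (standard textbook Black-Scholes, nothing P123-specific), the call price depends on the stock's volatility but its expected return never appears:

$$C = S\,N(d_1) - K e^{-rT} N(d_2),\qquad d_1 = \frac{\ln(S/K) + \left(r + \tfrac{1}{2}\sigma^{2}\right)T}{\sigma\sqrt{T}},\qquad d_2 = d_1 - \sigma\sqrt{T}$$

The volatility $\sigma$ is an input; the stock's expected return (any "alpha") is not.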

I would strongly advise anyone here not to attempt to gain insights on mispriced options using the P123 platform as it has no mechanism to study volatility in a meaningful way.

Of course you're right: options are priced according to volatility. But an underpriced put option on a stock with excellent prospects is not going to make you as much money as an underpriced put option on a stock with terrible prospects. One way to look at a stock's prospects is to use Portfolio123 to examine its fundamentals. Creating a system for doing so with AI has some promise. I pay attention to volatility and options pricing, and I don't use Portfolio123 for that. But because I choose my puts carefully according to ranking systems I've built and backtested with Portfolio123, I have a realized return of 35% on the puts I've bought in the last two-and-a-half years. Considering my weighted average holding period is 133 days, that's more than 140% annualized. I don't think what I'm doing is necessarily a good way to make money--this kind of strategy is far too volatile. But it's an excellent way to hedge a long-only portfolio.


Just a thought, but classification might be well-suited for options. Classification (like in the paper) does seem to work. In your case, class 1: I get a net return; class 0: it expires without a total return.

I have not tried this particular use of a classifier.

From the thousands of machine learning model tests I have made, a classifier almost always performs worse than a regressor on out-of-sample data. The same goes for total return: relative performance is needed so that the target is adjusted for market performance.

Since the model is getting stock-based inputs (fundamentals/technicals related to the stock), it makes sense to provide it with a target that is not driven by market fluctuations, hence the relative performance.
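A minimal sketch of what I mean by a relative target, assuming a long panel with one row per (date, ticker) and a forward-return column (names are placeholders):

```python
import pandas as pd

panel = pd.read_csv("factor_panel.csv")            # hypothetical factor/return download

# Subtract the universe mean return on each date so the target measures out- or
# under-performance rather than whatever the market did that period
panel["excess_return"] = (panel["fwd_return"]
                          - panel.groupby("date")["fwd_return"].transform("mean"))
```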


Working on it! So many distractions :frowning:

While I'm starting to grasp (or trying to) the concepts of machine learning, training methods, and evaluation metrics, I have a question that might seem basic.

What exactly is being tested?

Are we evaluating individual nodes, like comparing "close(0)/close(120)" and "close(0)/close(140)"? Or is it the combined effect of node weights within the entire ranking system?

Additionally, are these tests like rolling tests or simulation scenarios?

Like the top-ranked "P123 extratrees III", is that a test of a ranking system or a full simulation?

You could think of an AI model as a way to set the weights and direction for the nodes of a ranking system, but it also allows more complex, non-linear relationships. For example, a low PE is good, but at a certain level, like 5, it's not.
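A toy illustration of that kind of non-linearity, with made-up numbers rather than real data: a shallow tree can learn that moderate PEs are best while very low PEs are bad, which a single linear node weight cannot express.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
pe = rng.uniform(1, 40, 5000)                      # synthetic PE ratios
# best returns for moderate PE, poor below 5, mediocre when expensive
ret = np.where(pe < 5, -0.05, np.where(pe < 15, 0.10, 0.02)) + rng.normal(0, 0.02, 5000)

tree = DecisionTreeRegressor(max_depth=3).fit(pe.reshape(-1, 1), ret)
print(tree.predict([[3.0], [10.0], [30.0]]))       # roughly -0.05, 0.10, 0.02
```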

The main goal of the AI tools we're creating is to find the best AI model for the features (factors) and target you chose. A variety of reports are generated to compare accuracy, performance, and generalization (how well it does out of sample).

The model "P123 extratrees III" uses the ExtraTreesRegressor with certain parameter values. In the example it was the best model in a Cross Validation (CV) test.

A CV test trains a model for a certain period (the training period), then uses the model on unseen data (the holdout period) to make predictions. These predictions are used to create quantile portfolios. Multiple CV tests are done, and the holdout results are concatenated to create output similar to what you get from the ranking system performance tool.

The key advantage over the current ranking systems is that the process is built to do the analysis the correct way: make changes with training data and test using unseen data. To do the same with our current tools is possible, but extremely time consuming.
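In rough pseudocode, the flow looks something like this minimal sketch; the data layout, fold construction, and hyperparameters here are illustrative assumptions, not the actual implementation:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor

panel = pd.read_csv("factor_panel.csv")            # hypothetical (date, ticker) rows
features = ["roe", "pr2book", "ep", "mom6m"]       # placeholder feature names

date_blocks = np.array_split(np.sort(panel["date"].unique()), 5)   # sequential date blocks
holdout_preds = []
for i in range(1, 5):                              # expanding-window train/holdout splits
    train = panel[panel["date"].isin(np.concatenate(date_blocks[:i]))]
    test = panel[panel["date"].isin(date_blocks[i])]
    model = ExtraTreesRegressor(n_estimators=500, min_samples_leaf=50, n_jobs=-1)
    model.fit(train[features], train["excess_return"])
    holdout_preds.append(test.assign(pred=model.predict(test[features])))

# Concatenate the holdout predictions, bucket them into deciles per date, and look
# at the average realized target per bucket: analogous to the ranking system
# performance output
oos = pd.concat(holdout_preds)
oos["bucket"] = oos.groupby("date")["pred"].transform(
    lambda p: pd.qcut(p, 10, labels=False, duplicates="drop"))
print(oos.groupby("bucket")["excess_return"].mean())
```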

Hope this helps a bit.


Marco,

Nice explanation.

"P123 extratrees III" has pre-set hyper parameters that you have found to work well on your testing and there is an "extratreesII" and "extratreesI" (maybe 'extratreesIV") offering a range of hyper parameters, perhaps? That will make it easy for most users, if so. Also, Extra Trees regressors have been pretty robust over a wide range of hyper parameters for me (a good thing suggesting a few options is more than enough).

Kind of like cross-validation of the hyperparameters across a number of beta users with their own favorite factors (again, a good thing, suggesting we don't need full access to all of the hyperparameters to start with). It saves users from having to optimize the hyperparameters, as this has already been cross-validated by P123.
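Conceptually something like this, I imagine; the preset values below are made up (I have no idea what the real ones are), and real market data would of course use time-ordered splits rather than plain K-fold:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=2000, n_features=20, noise=10, random_state=0)  # stand-in data

presets = {
    "I":   dict(n_estimators=200, min_samples_leaf=100),
    "II":  dict(n_estimators=500, min_samples_leaf=50),
    "III": dict(n_estimators=1000, min_samples_leaf=20),
}
for name, params in presets.items():
    model = ExtraTreesRegressor(n_jobs=-1, **params)
    score = cross_val_score(model, X, y, cv=5).mean()   # pick the preset that generalizes best
    print(name, round(score, 4))
```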

So maybe, literally, members just have to provide their favorite factors and they are suddenly doing advanced machine learning thanks to P123? VERY nice, and I think this seems likely (from what I understand) without any exaggeration.

Doing it with cross-validation provided by P123. You don't see that every day (rare, or maybe unique to P123 as far as retail investor options go).

You link to Sklearn's Extra Trees Regressor above. Are you using Python, C++, or something else? Not sure that matters, but I would be interested.

Jim

Right.

That was the idea. Although calling it "advanced ML" is perhaps a stretch.

Not knowing much about ML when we started helped in figuring out which areas would be challenging for others :slight_smile:

The ML backend is Python.


So people already have access to complete and detailed documentation for much of P123's AI/ML release (e.g., your Extra Trees link).

Documentation for the Python version of XGBoost is here: XGBoost Documentation

Much of the rest is presumably at Sklearn, e.g., Ridge Regression.

The XGBoost community, with moderators who maintain the open-source program, is here: XGBoost. An example of a lame question that should probably be deleted from the site (i.e., about base_score and the i.i.d. assumption) is here: CV Shuffling improves performance *a lot* - XGBoost. But they were kind enough to answer it.

In any case, there are already in-depth resources and information available for much of the AI/ML about to be released.

Jim

Marco:
After you have your AI/ML "AI Factors" available, will we still be able to use Download Factors for running our own ML algorithms? On the one hand, I like the idea of P123 having a well-developed, easy-to-use ML system. And in all likelihood I would use it rather than trying to run my own. But like many others, I can conceive of some strategies that I would like to experiment with that are unlikely to be in a generic P123 system.

Of course