What do you use your API credits on? Ultimate subscription

I have a question for those of you who get 5,000 or 10,000 API credits each month through the “Portfolio” or “Ultimate” subscription: https://www.portfolio123.com/app/register/modify

I have been using Danp's node Google Sheet and have also started using DataMiner, so I use some API credits each month.

But I have been wondering whether to upgrade to “Ultimate,” even though it is expensive, especially for access to the optimizer and the better version of rolling tests. I also see that it comes with 10,000 credits per month.

Do you use your credits each month, and if so, what do you use them for?

Just a short answer. Anyone doing some types of machine learning and using 150 nodes like you do would have to have an Ultimate membership now just to rebalance their systems. There is no alternative. And this is a direct answer to your question as to what I would be doing with an Ultimate membership, so it is not a hijack of this thread to make a separate point.

TL;DR: it is not optional now for machine learners with the number of nodes you have. And that does not even include the downloads needed to train a model!

I do get that sometimes the trained model can be fit back into a ranking system. In fact, I am doing that myself now, so I get that. Call your algorithm fundamental analysis (and not gradient descent) if you wish. But you have an algorithm (whatever you choose to call it) that you can fit back into a ranking system for now, which is nice and has advantages without a doubt.

That does not always work for everyone all of the time. And why would anyone expect it to? That would be a pretty bold assumption for someone who has never run a random forest. Also, unless you have tried a non-linear model, are you sure you will not want the option in the future, with a different type of Python-requiring fundamental analysis, I should say?

FYI - you can also purchase additional API credits on the API page in account settings:
https://www.portfolio123.com/sv/account-settings/dataminer-api

The credits you purchase are only used after you have used up your monthly allotment of credits and they do not expire.

Thank you. Yes, I know. I have bought some extra credits. I'm just wondering what other uses there are for these credits besides DataMiner and the Node Ranking Google Sheet.

I'm not sure, but it seems that a lot of people have their own Python code to do some kind of mass stress testing or optimization of their own ranking systems.

As a direct answer to the question: I would do something like “mass stress testing” myself. I would perhaps call it cross-validation, or use of a train/test split, or a holdout test set. But I would definitely do that, whatever it is called!

That is a cool use! It is all kind of integrated, however, if P123 wants to market this. Someone is not going to come to P123 to do a lot of “mass stress testing” (whatever you might wish to call it) if they cannot do the other things they need to do to actually use the systems they develop.

Rebalancing their strategy once it is “mass-stressed,” being a clear example.

But exactly right, IMHO. Maybe I would upgrade to Ultimate AND pay for credits too (with a few simple additions to P123’s already advanced offerings). I have not done that yet, to be clear. I do need to rebalance my strategy once it is developed. That seems important to me for some reason.

With regard to systems that can be put back into the ranking system, I do not see myself ever needing DataMiner for that. Too much work and too much drama when there is an easier way (and I think a more effective way) with machine learning.

Just my answer (with my particular methods in mind) to a very good question.

Do you use your credits today with the API and some kind of machine learning?

And what kind of machine learning? Would anyone else understand it and be able to start using it?

Whycliffes,

Yes, I have just now started using DataMiner thanks to Dan’s kind help. I am not a good Python programmer and I needed his help to get started!!!

I can say with 100% factual certitude that you are a better Python programmer than I am. I would rather put my money on that fact than bet that the sun will rise tomorrow.

So, I can say with certainty that you can find something over at Scikit-Learn that will at least give you some ideas. And at least some ideas from Bard and ChatGPT.

Let me start that for you with a (probably poor) understanding of what you appear to be interested in:

So this is a fairly detailed question to ChatGPT, with my perception being that you are a good programmer and can sort through some of this on your own:

Q to ChatGPT: “I have some data in a csv file. It has 150 “features” and my target is the “next month's returns for the ticker.” The index would be the ticker and the date. Can you give me a program for a ridge regression with the features and the target being the returns? Maybe use 5-fold cross-validation for the hyper-parameters.”

Code:

# pip install numpy pandas scikit-learn

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

# Load the dataset
data = pd.read_csv('your_data.csv', index_col=['ticker', 'date'])

# Separate features and target
X = data.drop('next_month_return', axis=1)
y = data['next_month_return']

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a Ridge regressor object
ridge = Ridge()

# Create a dictionary of all alpha values we want to test
params_ridge = {'alpha': np.logspace(-4, 4, 20)}

# Use grid search with 5-fold cross-validation to test all alpha values
ridge_gs = GridSearchCV(ridge, params_ridge, cv=5)

# Fit model to training data
ridge_gs.fit(X_train, y_train)

# Save best model
ridge_best = ridge_gs.best_estimator_

# Check best alpha value
print("Best alpha: ", ridge_gs.best_params_)

# Predict target for the test set
y_pred = ridge_best.predict(X_test)

# Check test set error
mse = mean_squared_error(y_test, y_pred)
print("MSE: ", mse)

ChatGPT’s comments on the code:

" Replace 'your_data.csv' with the path to your CSV file. The 'alpha' values in the params_ridge dictionary are the hyperparameters to be tested in the grid search. You can modify these values based on your requirements.

Please note that the grid search can be quite computationally expensive if you have a lot of data or if you are testing a lot of hyperparameters. The computation time will increase exponentially with the number of hyperparameters and the amount of data you have.

Additionally, remember that the effectiveness of your model will highly depend on the quality of your data and the appropriateness of the model for your data. If Ridge Regression does not provide good results, you might need to experiment with other types of models or preprocess your data differently."

Back to jrinne: You do not need me for this, and this is definitely the meat-and-potatoes of many of the papers you are linking to. It may seem a little hard at first, but it is definitely easier than wading through those papers. For me, anyway.

Hope that helps a little with finding what is right for you!

Best,

Jim

Whycliffes,

My perception is that you probably have some noise factors mixed with some great factors in your 150 nodes.

If you want to investigate whether every one of those factors is a good factor that should be included in your model using Scikit-Learn, I would add this to the above.

I would first do a Lasso regression (just ask ChatGPT), and please don't “pimp me” (a very friendly term used in medical school for questioning people about their diagnoses during rounds) on Lasso regression. Ask ChatGPT and decide for yourself whether what I am saying is BS. I would be happy to address any serious questions on this, but if ChatGPT can't manage to make it understandable after several different phrasings of the question on your part (and a Google search), I doubt that I will be of much help.

But if I were you, I would consider running a Lasso regression first, and use it to get rid of at least a few of your less-well-performing factors in a rigorous manner.

Then do the ridge regression on only the factors you keep after the Lasso regression. Ask ChatGPT how to do it with 5-fold cross-validation (and a grid search). Or you could probably take the Ridge regression code above and substitute Lasso for (almost) everything, as in the sketch below.
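For what it is worth, here is a minimal sketch of that two-step idea, assuming the same placeholder CSV layout as the Ridge code above (the file name and column names are placeholders, not anything P123-specific):

import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV

# Same placeholder CSV layout as the Ridge example above
data = pd.read_csv('your_data.csv', index_col=['ticker', 'date'])
X = data.drop('next_month_return', axis=1)
y = data['next_month_return']

# LassoCV picks the regularization strength by 5-fold cross-validation
lasso = LassoCV(alphas=np.logspace(-4, 1, 20), cv=5)
lasso.fit(X, y)

# Keep only the factors Lasso did not shrink all the way to zero
kept = X.columns[lasso.coef_ != 0]
print("Kept", len(kept), "of", X.shape[1], "factors:", list(kept))

# X[kept] is then what you would feed into the Ridge grid search above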

I definitely think you have the programming skills to do that if you—after your own investigation—think it may be valuable for you.

Best,

Jim

Whycliffes,

Obviously I have done none of this with your factors. And to be honest, it has been a while since I have run a ridge regression with P123 data. So maybe none of this would apply to you, your factors, or your models.

But I want to make you aware of this: you can get a weight for each factor after the ridge regression and plug those weights into a ranking system.
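As a sketch of what I mean, continuing from the Ridge code above (ridge_best and X are the names from that example, and the sign handling here is just one simple convention, not a P123 recommendation):

import pandas as pd

# One coefficient per factor, labeled with the factor names
coefs = pd.Series(ridge_best.coef_, index=X.columns)

# One simple convention: drop negative-coefficient factors (or flip their
# sort direction in the ranking system) and scale the rest to sum to 100
positive = coefs[coefs > 0]
weights = 100 * positive / positive.sum()
print(weights.sort_values(ascending=False))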

I have been surprised with other models (other than ridge regression) in that sometimes a model with regularization (something ridge regression does) will backtest better than a model without regularization!!! Whoa.

But in theory, your ridge regression model may not backtest as well as a fully optimized model. The trade-off is that it would be expected to perform better out-of-sample in a funded port with new data (unseen by anyone). Which is what you really want, I hope.

Sure, someone can overfit and show a great backtest in the forum, and maybe even I might do that (hopefully with an IRONY ALERT for Kurtis). But most of us, at the end of the day, like to make money in our ports too.

Anyway, accepting at least a little decrease in the backtest results as something to be expected is all that I am recommending. If any of this interests you at all, of course.

Edit: Whycliffes,

One last serious question, and then I will leave you alone to investigate this further or reject the method outright.

But assuming you wanted to test this idea, and that ChatGPT is able to fill in the details that I got wrong or explained poorly, I have this question:

Was there some value in this method, and is it easier or harder for you to let your computer do the optimizing rather than use your present spreadsheet method?

Whatever your answer, my sincere apologies for not having been as articulate as ChatGPT can be about this topic in the past. And none of this is my original idea, so I will not take your answer personally. Maybe I will go to Fisher's and Pascal's grave sites to toast what they did and say: sorry, guys, wherever you are now, but P123 members have decided that what you accomplished during your lifetimes is BS; your value has finally been decided by someone who really knows. I will not take your observations personally and will just keep using the method myself if I continue to find it useful.

FWIW I believe we live in a rare time with a rare opportunity that P123 can help us realize.

Jim

I just started looking at the Python API last month, and I ended up burning all 10k credits trying to run optimizations on the weighting of the Core Combination and Small Factor Focus ranking systems. I would say I used 10-20% ironing out bugs in my code and then 50% running the same optimization method a few times. I would get better ranking performance, but not a better Simulated Strategy backtest result. I have not figured out why… But I can say that 10k credits is not enough to play around with a more complex optimizing system like Bayesian optimization using Python's scikit-optimize and gp_minimize if you let it run rank_perf for each evaluation. It can burn 1,000 credits in one attempt with little backtest improvement to show for it. Which is to say, I got better Simulated Strategy backtest results from my “dumb” change-one-rank-weight-at-a-time method than from the gp_minimize method.
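For anyone curious why that burns credits so fast, here is a rough sketch of the gp_minimize pattern. The run_rank_perf helper is a hypothetical stand-in for however you call P123's rank performance API with a set of node weights, and the node count, bounds, and n_calls are arbitrary:

from skopt import gp_minimize
from skopt.space import Real

N_NODES = 10  # arbitrary: one search dimension per ranking-system node

def evaluate_weights(raw):
    # Normalize the raw values into node weights that sum to 100
    weights = [100 * w / sum(raw) for w in raw]
    # Hypothetical helper: each call here runs rank_perf through the API,
    # which is the step that burns credits on every single evaluation
    top_bucket_return = run_rank_perf(weights)
    return -top_bucket_return  # gp_minimize minimizes, so negate

result = gp_minimize(
    evaluate_weights,
    dimensions=[Real(0.01, 1.0)] * N_NODES,
    n_calls=50,  # 50 evaluations = 50 rank_perf runs' worth of credits
    random_state=42,
)
print("Best raw weights:", result.x)
print("Best top-bucket return:", -result.fun)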

So going forward, I am planning on thinking up as many “good” factors as I can and then downloading historical weekly ranking data for as large a universe as I can. Then I plan on writing my own code to create ranking systems from that data so I can test more ideas without using credits. I also plan on dipping my toes into machine learning, starting with XGBoost (a minimal sketch of what that might look like follows below).
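Since XGBoost came up, here is a minimal sketch of what that first experiment might look like, reusing the placeholder CSV layout from the Ridge code earlier in the thread (none of these names or parameters come from P123):

import pandas as pd
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

data = pd.read_csv('your_data.csv', index_col=['ticker', 'date'])
X = data.drop('next_month_return', axis=1)
y = data['next_month_return']

# Note: a random split leaks time information with market data; a
# date-based train/test split would be better in practice
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Modest depth and learning rate as a starting point; tune with CV
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

print("Test MSE:", mean_squared_error(y_test, model.predict(X_test)))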

Here is a breakdown of how you might use credits for downloads and machine learning. Note it is 25,000 data points per credit, and it seems to round up, so 25,001 data points is 2 credits!

  • Training (one-time cost): 3,000-stock universe, 100 factors, 10 years x 52 weeks per year = 6,240 credits
  • Weekly rebalance: 3,000-stock universe, 100 factors, 4 weeks = 48 credits
  • Daily rebalance: 3,000 stocks, 100 factors, 30 days = 360 credits

Total one-time cost: 6,000-12,000 credits if you do 10-20 years weekly. Multiply by 5 if you want daily data, or scale by the fraction of factors or universe size.

Running cost: roughly 50-365 credits per month, more if you have a larger universe or more factors.
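A trivial helper makes those numbers easy to recompute for your own universe and factor count (25,000 points per credit and the round-up behavior are as noted above; everything else is just arithmetic):

import math

POINTS_PER_CREDIT = 25_000  # seems to round up per request, per the note above

def credits(stocks, factors, periods):
    # One data point per stock, per factor, per period
    return math.ceil(stocks * factors * periods / POINTS_PER_CREDIT)

print(credits(3000, 100, 10 * 52))  # training, 10 years weekly -> 6240
print(credits(3000, 100, 4))        # weekly rebalance, ~1 month -> 48
print(credits(3000, 100, 30))       # daily rebalance, 1 month -> 360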

Over the next few months I plan to add more time, factors, and universe size. That being said, I do not plan on keeping the Ultimate membership long term unless I can start getting some really good alpha, as it's hard to justify the cost otherwise.

-Jonpaul

Wow, thank you for the great reply! I agree. I have been using the rank optimizer just to see how weighting can change performance, and I am seeing the same as you: a good ranking system is not always simple to convert into a good simulator result.

Maybe you have some input on my own method for trying, as best I can, to run the same testing techniques in the rank performance test and the simulator:

In my simulator, I have:
25-stock portfolio
Average holding is 101 days, and turnover is 300%
Universe is USA and Canada
Volume is median (91) > 100,000, and price > 1
10-year test (using the other 10 years as an out-of-sample test period)

In my rank performance test, I use:
The same volume and price rules (I put them in the universe)
8-week rebalance
70 buckets (so each bucket has 70 stocks)
10-year test
Minimum price 1
Same universe

The main problem is choosing the rebalance period to get equivalent testing between the rank performance test and the simulator. I have set it to 8 weeks, but I am not sure that is the best choice.

What have you done to try to match the testing in the Rank performance test and the simulator?

When it comes to the rank optimizer, I feed it a matrix of numbers from 1 to 50, with 100 rows and 50 columns, and force the matrix to keep at least 20% of the weights at 0. (A sketch of generating such a matrix follows below.)
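In case it is useful, here is one way to generate a matrix like that with numpy, reading the 20% rule as per-row (that reading, and all of the numbers, are assumptions about the setup described above):

import numpy as np

rng = np.random.default_rng(42)
rows, cols = 100, 50

# Random integer weights from 1 to 50 for each of the 50 columns
matrix = rng.integers(1, 51, size=(rows, cols))

# Force at least 20% of each row's entries to 0
n_zero = int(np.ceil(0.2 * cols))
for row in matrix:
    row[rng.choice(cols, size=n_zero, replace=False)] = 0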

I think to get the same rank perf and simulation results you would need to match the settings: same fixed slippage, same rebalance period, same universe, and the same sell criteria. That means the simulation sell criterion needs to match the portfolio size: 25 stocks = sell on RankPos > 25. But if you use variable slippage you will see a lot of performance loss driven by the higher turnover the tight sell rule creates. Instead, I look for relative performance between ranking systems with more realistic slippage and sell rules.

Ranking perf vs simulation:

  • Buckets/portfolio size: 20 buckets and 20 stocks. This keeps it simple, and since I am comparing relative rather than absolute performance, it is OK that the 20 buckets have more stocks than the simulated strategy
  • Universe: similar rules to yours, but I split it using EvenID for training and validation so I can use the same time period for both
  • Slippage: 0.2% slippage in the ranking test and variable in the strategy. This can contribute to the difference if you trade a lot, but with $100k average daily trading I would not expect it to be significant
  • Sell criteria: the ranking perf operates at the portfolio “size,” so I set my simulation to 2x the portfolio size, i.e., RankPos > 40. 2x may not be ideal since it forces turnover up, but it makes the results more comparable to the ranking results
  • Rebalance/period: same rebalance frequency and period, weekly and 10 years. This seems like the best way to be consistent between the two
  • Better criteria: I am looking for consistently high returns, so high alpha and low downside deviation

The interesting thing is that for the two ranking systems I am comparing, the training sub-universe shows consistent relative returns between the ranking perf and the simulation. However, the validation sub-universe and the total universe show opposite ranking perf and simulation results. Even stranger, the system that ranks higher on the ranking perf and lower on the simulation has the lower turnover in the simulation, which is part of what I expected to drive the difference.

Just look for the best optimized ranking system.

I would set the number of buckets to 50 (or 100).

If you have 3,000 stocks in the universe and the sim holds 25, the top 2% represents 60 stocks. I'm guessing the sim sells when the RankPos gets close to 50, so the top 2% should represent the model holdings.

Optimize and then select the ranking system with the highest Max value for simulation.

Generally, you don't want to select just on the Max value, but assuming you're OK with the other characteristics of the bucket curve, it's an OK place to start.
