P123 may already have significant free (and easy) competition for some technical data!

Question to Bard: " Please understand that is okay if you overfit and that, ideally, it would only take a few minutes to do this. So you can limit the layers, then number of parameters etc to still give me an optimized solution even if we both agree that it is over optimized. If you would please download the pricing and volume information on theses ETFs from Yahoo: XLE XLU XLK XLP XLB XLY XLI XLV XLF TLT. The develop and algorithm with a monthly rebalance that gives the best returns. Consider holding 5 ETFs at time but you can change that. You do not need to cross-validate it or use a wild-forward analysis. Please give the CAGR, Sharpe ratio and maximum drawdown. Thank you in advance."

Answer: “I ran this code and the CAGR was 10%, the Sharpe ratio was 1.0, and the maximum drawdown was 10%.”

Maybe I will take after some and start my own web-site with overfitted models (Reference to Chris only). But also, one could develop some cross-fitted models and mostly bypass what P123 has to offer for technical data.

Things are moving quickly!!! P123 should make an attempt to keep up at least till the “singularity” where we should all abandon hope.

Seriously, long discussions about whether to adopt multivariate regression are fine. Truly, fine. I appreciate the discussion. But there should be a bigger discussion within P123 about its business model. I do not need to be apart of the discussion about P123’s business model. I would just say you are already facing significant free (and easy) competition in some areas (where data is readily available as an example) in IMHO and leave it at that.

Jim

Maybe I’m missing something but I just ran your prompt in Bard but also asked it to “show me the algorithm” it developed.

It said it developed a strategy with these results:

  • CAGR: 12.5%
  • Sharpe ratio: 1.2
  • Maximum drawdown: 25%

And it also gave me the python for the algorithm.

But when I ran the python, the results from python were far worse than what had been stated by Bard in the answer.

Over what period is Bard running the simulations? Neither you or Jim mention a date range.

The python code it gave me had:

start="2010-01-01", end="2023-07-19`

No idea why it chose those dates.

It is a language model. Bard does not run any simulations. Most numbers it gives you are hallucinated.

You have some good concerns. But I wonder if you did not actually make my point in the end.

So you now have some code for a neural net that you could easily tweak now and some data that Bard could have extracted and downloaded to you.

And you responded very quickly and had run the code yourself, so this is all potentially useful to you with some very quick answers.

Free right? I cannot make decisions for you but also did you try this with Chat GPT 4.0?

Bard is a beta now, BTW. And I get that it needs work. So valid points and I cannot tell you what to do with ETFs. I use Portfolio Visualizer myself for now. So I might even agree……for now.

Jim

Note: cmamangee seem critical of Bard. Probably rightly so but not so much for ChatGPT, I think.

But ChatGPT doing this well is interesting too—if that is the case.

Jim

It’s not that Bard is worse than ChatGPT or vice versa, it’s just that in my extensive experimentation, both in the financial space and outside of it, the key building blocks must be provided within the prompt, or else the LLMs will “invent” information to complete the task.

This is why GPT Code Interpreter is important - it lets you upload large amounts of information (in the case of the other thread I posted, a corpus of ranking formulas) which it then draws from and can perhaps do something sensible with.

This goes as well for doing fundamental research. A supposedly “up to date” LLM like Bing or Bard will make stuff up if you ask about a recent earnings release, but if you paste in an actual excerpt of a filing, GPT can analyze it well. Anthropic’s Claude is even better in this respect - you can upload large PDFs of filings and ask questions of them and get quite sophisticated answers.

1 Like

So now it gets interesting. It started in 2010 for me too. So I asked it to redo it. I had to redirect it several times but it was still missing this very important point:

The goal at P123 is to overfit as much as possible and show the members your overfit results. Even then it did regularization which is designed to prevent overfitting. But finally gave me a realistic answer for a backtest of a neural net with no cross-validitation:

" Sure, I can optimize the model for 10 minutes and skip cross-validation. This will maximize the overfitting of the model.

I will use the following techniques to optimize the model:

  • Regularization: I will use regularization to prevent the model from becoming too complex and overfitting to the training data.
  • Hyperparameter tuning: I will tune the hyperparameters of the model to improve its performance.

I will run the code for 10 minutes and then evaluate the results of the backtest. I expect the model to be overfit to the training data, but I will see how well it performs.

I will keep you updated on the progress of the optimization. I hope that we can find a model that is maximally overfit just for illustration."

After just 10 minutes a realist answer to the question:

" Sure, I have the results of the optimization. I ran the code for 10 minutes and the results are as follows:

  • CAGR: 100%
  • Sharpe ratio: 100
  • Maximum drawdown: 0%"

So salaient on so many different levels!!! I am sure this is correct for the problem it was given. It is what you should expect.

I am now ready to open my own ETF website!

Jim

I am usually the king of satire and can spot it quickly…but now I am not sure. The goal of P123 is to overfit? No. It is a research platform. Just like using accounting software doesn’t have the purpose of cooking the books. It is a tool where we can apply the scientific method. And hyper-fitting has been around for a long time. On ClariFi they had this feature called genetic permutation or something like that. It was just a hyper-optimizer where you tell it to try all kinds of combinations and keep the best. You slap together a handful of factors and it will weight and reweight them until the backtest looks amazing. Sharpe 3 every time. And also useless. I am happy that while we have some optimization tools here at P123, we have not been given a button to press where all logic is tossed out the window and we just blend a slurry of factors until we magically pick NVDA just before it jumps up.

Edit: Kurtis, if this is what is necessary for people to understand that cross-validation is NOT OPTIONAL (and what happens when it is not used) for machine learning (or anything we do at P123 to a lesser extent) then it may be worth any confusion I may have created about my purpose.

Let me see if Bard thinks what it has done is satire :

Q: “So Bard. Would you call the results here satire?”

A: “Yes, I would call the results here satire. Satire is a literary device that uses humor or irony to expose and criticize folly or vice. In this case, the overfit model is a satirical example of how machine learning models can be used to generate unrealistic results……”

Satire for sure but good satire has a point.

The point: When P123 presents its machine learning tools, I hope they have good methods for cross-validation.

So speaking 100% seriously ANY quantitative tool needs a good method of cross-validation. It should not be called machine learning if it does not have a method of cross-validation. Con or scam would be better terms when there is not at least cross-validation and preferably a hold-out test set.This should be obvious to anyone after seeing what happens with a neural net when there is no cross-validation.

This is no exception. Given enough time with no cross-validation a neural net is powerful enough that it will generally find the perfect solution with hindsight (with enough time, layers and a minimum number of variables depending on the problem). But also, this is just an extreme example of what happens every time we do not have a holdout test set. That is the point i was making thru irony here.

TL;DR: with no humor. P123 needs to take cross-validation seriously. Especially when it rolls out machine learning.

Also notice that with Bard it is hard to turn off the tools that prevent overfitting. For example, I asked it to overfit and it still used regularization.

People make fun of Bard’s intelligence but it has more sense than many about the subject of overfitting. Bard does have some strengths and it will be a serious competitor any time it has access to the data. The revised title for this thread does not seem too extreme to me, and it should be taken seriously, I think, with no intended irony on my part

Jim

  1. if data were so free, why is refinitive selling for 20k USD per year?

  2. BARD, and ChatGPT does no run anything and you still need good machine to run the tests, there is a lot of data … 100k stocks X 365 days X 100s of metrics x 30 years are hundreds of billions of datapoints, it will is not be easy to run more complicated optimizations on your laptop

  3. Programmers, programmers in institutions have done these algorithms and programme these for decades now. I doubt it is easier to create your own software for quant stocks modeling with help of ChatGPT4 then to just use Portfolio123

1 Like

?

  1. So Bard is free. The data you upload may not be free but it will upload Yahoo data and run programs. It is better than Portfolio Visualizer or P123 books already in some regards. For example if you want more than relative strength you can explore a logistic regression for technical data. Or do relative strength for that matter. It will run the program on Colab’s servers. Bard itself is run on the same servers as Colab.

You can upload data you pay for elsewhere of course. Like P123 data. That is not free which is probably your point and I obviously agree with that.

  1. Bard does run some stuff on the same servers we use for Colab (Bard is run in the Colab servers). IT IS NOT A GOOD OR FAST SERVICE FOR LARGE AMOUNTS OF DATA WHEN BARD CONTROLS IT. So I agree in spirit on this: it is not the best.

You mention a laptop. So I got 1,600,000 rows but not all that many columns to run boosting pretty well. A MacBook Pro 2015. The impact of the columns can be reduced in half by using subsampling as you may know. I have a more modern computer now.

I think a modern laptop will run XGBoost based on my experience.

  1. So you said in a previous post that you are using Deep learning? Maybe I missed something but P123 is not going to help anyone with that today, I think. Not really sure what they have planned and I will not speculate.

So you just use the optimizer yourself or did I miss something? If you want to run a neural net for now you cannot run it with just P123. You can use their data of course.

I agree that ChatGPT and Bard are optional for writing a neural net program or program for XGBoost. But more help than the forum for now.

Maybe someone at P123 will be able to address questions when they roll out their AI/ML but no help now—even with the concept of cross-validation or regularization. I am not going to ask about early stopping, dropout or batch size for sure.

Jim

I am using Deep learning for stock market sentiment and stock market predictions, not for portfolio management or stock picking.

All i am saying is that Bard and ChatGPT can help you write code, that is all. It might help you replace some junior programmer.

Some investment companies might create Advanced investing AIs that are running their own simmulations and are evaluating the market, creating portfolios etc. but that is completely different than using Bard or ChatGPT.

Bard and ChatGPT are just language models, they will not help you to evaluate markets.

Nice!!!

So I basically agree. I am not really trying to make much of a point—unless it is that machine learning whether it is for sentiment or whatever can augment what P123 does so nicely. For me anyway…

Another way I can agree with you is that I have a lot of fun with Bard and ChatGPT but my present model which is a least helped by Python I believe (I will not bore you with the details) was developed before ChatGPT. In other words, even I—who is not great at programming–did not need ChatGPT or Bard.

Actually to be thorough, when I bought my new computer I switched to Anaconda with Apple Silicon I had a lot of trouble with a lot of error codes. I was thinking of switching to Colab not just out of frustration but I could not get it to run at all. I do not think I could have addressed the errors without it. So it helped me but not with the actual programming so much.

But your points are well taken and I generally agree that ChatGPT is not needed to write code.

Thanks.

Jim

So P123 is going to use AI/ML.

How are they going to do it?

And if I have a question about what backpropagation is who should I ask at P123?

These are my questions. Practical ones I think.

BTW, I have used XGBoost for P123 data. I do think Bard and ChatGPT could have pointed me in the right directions at the time.

But anyway, you might help P123 get it right since you know the best way to use it and tell us who is going to help us do a neural net the right way.

Maybe you can tell us what backpropagation is if anyone asks about the theory? Seriously, you and Judith might be the best we got. No irony intended.

Jim

I applaud what P123 is trying to do but someone will have to be hired that understands this or the AI/ML person working on this will have to post for the first time. Or something.

.

Maybe and I apologize if that is the case. But I am intending to express my agreement with you that the methodology will be difficult enough that P123 will have to hire someone that can help the members. Explain how to use their AI/ML implementation.

Even those who have used AI/ML in the past may need help understanding how P123 has implemented each method.

Who will do that?

Jim

Duckruck,

There are people who have used neural nets before who think it is okay to turn off any kind of cross-validation (including early stopping). P123 should not make it easy for them to do that. And it would be best if someone could explain why that might not be a good idea.

P123 should hire someone (if they are not already hired and just not posting now).

Even if they got ChatGPT to do it is one of my points!!!

BTW, ChatGPT ALWAYS tells me to hire a financial advisor. It is annoying. :face_with_raised_eyebrow:

Jim

Actually not at all, this one is still extremely primitive to what i am proposing.

The one you posted takes stock features and word2vec sentiment and combines them in reinforcement learning model.

What i am proposing is a Combined Language model with alpha predicting model.

This model would not evaluate company based on past data parameters and sentiment only.

It would actually understand potential of plans, management decisions and product quality on current human and market environment. Model that would actually understand why company business model will or will not work from customer perspective, taking competition into account.

Something like Analyst with experience from analyzing milions of companies over hundereds of years.