Better than Random()?

All,

So you join P123 anxious to start a port but decide to be cautious. You want to make sure you have a good model. And besides, you are OCD. You wipe your keyboard down with alcohol and you start. You are going to run 1,000 backtests and take the best one. Besides being OCD, you overthink things, and you decide on a null hypothesis:

Null hypothesis: I would have done just as well running the backtest 1,000 times with a random universe.

Alt. hypothesis: all of this is going to pay off. Early retirement here I come.

So I am still working on getting 1,000 serious backtests, but how well will I have to do to beat random results?

For 15 stocks with the Easy To Trade universe and no slippage, I think I will bootstrap the results 100,000 times and use a 99.9% confidence interval rather than eat up P123 resources on 1,000 random sims.

That turns out to be an annualized return of better than 27% just to tie what I would have gotten with random() as a ranking system.
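For reference, by "annualized" I mean standard geometric compounding, something like this (my actual spreadsheet formula was a shortcut, as I note below):

```python
def annualized_return(total_return: float, years: float) -> float:
    """Geometric annualization: (1 + R_total) ** (1 / years) - 1."""
    return (1.0 + total_return) ** (1.0 / years) - 1.0

# e.g., a sim that tripled its money (total return of 200%) over 5 years
print(f"{annualized_return(2.0, 5.0):.2%}")  # 24.57%
```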

Uploaded image of the Python code and results (the Excel spreadsheet was not uploadable):

Discussion first: no slippage in the sim. Should I have included slippage? Obviously I could have used a better equation for annualized return, but I think this one does imply a 28% annualized return (I am not actually OCD and take shortcuts). Any error corrections or refinements much appreciated!!!
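For anyone who wants to reproduce the idea without my spreadsheet, here is a rough sketch of the kind of bootstrap I mean. The return series here is a simulated stand-in with made-up parameters, not my actual sim results:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the weekly log returns of a random() sim
# (in practice, load the exported P123 returns instead)
log_returns = np.log1p(rng.normal(0.002, 0.03, size=520))  # ~10 years of weeks

# Bootstrap the mean weekly log return 100,000 times
n_boot = 100_000
means = np.array([
    rng.choice(log_returns, size=log_returns.size, replace=True).mean()
    for _ in range(n_boot)
])

# Upper end of the interval, annualized over 52 weeks
upper = np.quantile(means, 0.999)
print(f"99.9% upper bound, annualized: {np.expm1(52 * upper):.1%}")
```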

Jim


It's an interesting question, although when I think about it I question the value of comparing your best backtest of 1,000 to the best of 1,000 backtests of purely random stock picking. As you run more and more random backtests you inevitably get a better-performing outlier.

Think about this: let's say you ran an infinite number of backtests with random stock picking. Wouldn't you eventually arrive at the random picking that selects the 15 best-performing stocks every week (or whatever your holding duration), thus having a strategy that could never be realized without prior knowledge (aka a Time Machine)? That scenario does exist, but it is 1 out of an exponentially larger (although not infinite) number of possible stock picks over a period of time.
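To illustrate the outlier effect with a quick simulation (purely made-up noise parameters, so a sketch, not anyone's actual results): the best of N random backtests keeps drifting upward as N grows, with no skill involved.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_backtest(n_weeks: int = 260) -> float:
    """Total return of one 'strategy' that is pure weekly noise."""
    weekly = rng.normal(0.0015, 0.03, size=n_weeks)  # illustrative parameters
    return float(np.expm1(np.log1p(weekly).sum()))

# The maximum over N independent random backtests tends to grow with N
for n in (10, 100, 1_000, 10_000):
    best = max(random_backtest() for _ in range(n))
    print(f"best of {n:>6,} random backtests: {best:8.1%}")
```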

I think a better exercise is to randomize within the bounds of your stock-picking rules (and maybe bend those rules) to get a better idea of how your strategy distributes statistically compared to index funds (which are already effectively averaging to some extent).

Jeff

Jeff,

Thank you. I agree and would probably add other ideas to yours.

This took one minute once coded and another minute for the next download of any other sim results. It is saved in my Jupyter notebook for future reference. So it would not prevent me or anyone else from using your excellent ideas too!

Hmmmm…… That did not take long; what would Bayes do? :confused: JASP does not take too long to load……

Perhaps your idea of minor randomization of larger ideas to shake out the outliers is a type of regularization? Anyway, I really like it. There are a lot of great ideas in the forum!!!

Best,

Jim

Jim, do you find JASP to be slow with large datasets, say 300K rows?
I suspect doing the same thing purely in Python will be oodles faster.

Tony

Tony,

No doubt the Python code is best for bootstrapping. You literally download the P123 data, make a new column for the log returns, and save it (as BootstrapRandom in my case), clicking "Yes" to replace the existing file. Run the Jupyter notebook and it's done in a couple of seconds.
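In code form, the whole prep step is just a couple of lines. The column name here is a placeholder of mine, not P123's actual export header:

```python
import numpy as np
import pandas as pd

# Placeholder for the downloaded P123 sim data; in practice, something like
# df = pd.read_csv("downloaded_sim.csv")
df = pd.DataFrame({"DailyReturnPct": [1.2, -0.5, 0.3, 2.1, -1.0]})

# New column of log returns from the percent returns
df["LogReturn"] = np.log1p(df["DailyReturnPct"] / 100.0)

# Save for the notebook; "Yes" to replace the existing file
df.to_csv("BootstrapRandom.csv", index=False)
```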

JASP, in my experience, has been fast for bootstrapping daily data, but that is nowhere near 300k rows. Keep in mind that my MacBook is from 2015 (2.7 GHz dual-core Intel Core i5).

JASP does run natively on Apple Silicon now, and I have had my eye on one of those new MacBook Pros with an M2 (or M2 Pro or M2 Max) chip with a few more CPU cores. With good programming, multiple cores should help speed up bootstrapping. Not to mention the GPU cores on an M2, which CAN BE ACCESSED IN PYTHON (woohoo!). That and one of those new mid-engine Corvettes and my life will be complete :slight_smile:

JASP will not run non-parametric Bayesian statistics for me (I always just shut it down after waiting too long), so it can be slow even for a much smaller number of rows on my computer.

TL;DR: Yes, hard to beat that Python code no matter how you look at it.

Thanks.

Jim

Jim,

I highly recommend the Macbook Pro with the M2 Max chip! I love mine and I’m sure you’ll be happy too. I hooked mine up to a Dell Ultrasharp 38" curved monitor for an amazing experience in the office and I’m going to buy another soon so I can open several dozen more windows and programs (it’s never enough). Plus, your new machine will run plenty fast for you and with Apple quality, it will last for the next decade-plus.

Then you can upgrade to the latest M32-x Super-Pro-Max chipset with the virtual, translucent air-suspension screen and keyboard (all operated from your Apple watch v 51.2, of course).*

*Be sure to check out the optional AI headset that instantly converts thoughts into Python code or P123 algorithms (or anything else for that matter). Available Fall 2030.

Chris

Hi Chris,

My neural-net (hooked into ChatGPT) told me you were going to say that (as long as we are joking about near-future science fiction). HaHa.

Thanks! That chip is here now and it really is cool and I cannot wait. I keep slamming my present MacBook Pro down on my desk, hoping it will give up the ghost. But Apple is reliable as you said. I changed the battery once. That was so traumatic!!! LOL

Jim

Haha, Jim! I’m glad you got the futuristic humor.

For what it's worth, I just stumbled across a recent news article saying that the M3 chip will be available in the coming 15-inch MacBook Air and (apparently) a refreshed MacBook Pro.

The article suggests that these new M3-powered models will start coming out in June, so you might want to be a little gentler with your current MacBook Pro for a while longer… :rofl:

Chris

Chris,

Thank you. Good to know. I had not seen that.

It was always a question of when they would move from the 5-nanometer design to the 3-nanometer chip. Many speculated that they would do it with the M2 Pro and M2 Max chips.

There is a limit to how small they can make the transistors. When will Moore’s law need a new technology to continue? Can it continue?

I will let the science fiction writers (and real engineers) worry about that last one. Me, I am wrapping my MacBook in bubble wrap now :worried:

Jim