Systems Performance vs. Real-World Performance

Hi Brian - thanks for sharing your thoughts and methods. I also know it’s so important for me to take the long view. It’s so easy to look at a 15-year track record of a strategy such as O’Neil’s CANSLIM and be amazed by the returns. But I quickly forget that every strategy is going to have periods of underperformance. The true skill lies in staying the course! This year has been a tough one in that many, many strategies are having a tough year. I’m trying to remind myself frequently that the fundamentals of the stocks I own are very solid. Value will have its day in the sun again!

  • Mike

Yes, I have found that using Frank for mostly fundamental variables has worked well for me. I first test each variable to see that it performs by itself, and then select ranges for it. I only use variables that perform well on their own, and then test that each one still performs alongside the other variables I’m using. The ranking method doesn’t affect my results much, so it doesn’t matter which ranking system I use.

That’s great to hear. What kinds of fundamental factors show the best returns? Also, what is your turnover like? I find that most of these models are not investable due to high turnover.

If you’re using P123’s Trade module, I would focus more on Avg Return (Realized) and less on portfolio turnover. Personally, I prefer to see that metric be greater than 5%. Unfortunately, P123 doesn’t provide that stat for DMs, so you’ll have to ask the designers.


Agree 100%. In fact, one should aim for a 5-10% return per trade, a high win rate, and low turnover.
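As a rough sketch of what those numbers mean in practice — the trades, portfolio value, and arithmetic below are all hypothetical, and this is not P123’s actual Trade-module calculation:

```python
# Hypothetical list of closed trades as (entry_value, exit_value) pairs.
trades = [(1000, 1080), (1000, 950), (1000, 1120), (1000, 1040), (1000, 990)]

# Realized return per trade, then the metrics discussed above.
returns = [(exit_ - entry) / entry for entry, exit_ in trades]
avg_return = sum(returns) / len(returns)            # average realized return per trade
win_rate = sum(r > 0 for r in returns) / len(returns)

# Turnover: value traded relative to an assumed portfolio value for the year.
portfolio_value = 100_000
traded_value = sum(entry for entry, _ in trades)
annual_turnover = traded_value / portfolio_value

print(f"avg return/trade: {avg_return:.1%}")        # 3.6%
print(f"win rate: {win_rate:.0%}")                  # 60%
print(f"annual turnover: {annual_turnover:.0%}")    # 5%
```

With numbers like these you can see why low turnover matters: the fewer trades it takes to earn the portfolio’s return, the less each trade’s costs eat into it.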

I believe there are many factors which drive a wedge between simulated versus real-world strategy performance. I also believe that it is unwise to look at these factors in isolation or only a few at a time.

Pure variance between actual and simulated results is net neutral: it makes no distinction between doing better and doing worse than one’s expectation. In practice, however, these factors tend to work against us rather than for us.

Here are some factors which tend to work against us:

  1. Alpha-decay (i.e., information diffusion)
  2. Over-fitting risk
  3. Underestimating slippage and impact costs
  4. Underestimating fill risk (i.e., overestimating liquidity)
  5. Cognitive biases and errors, including our own and those of our investors
  6. Style-drift (i.e., investors’ tendency to tilt towards popular motifs right as they are about to drift out of style)

Again, I think you have to look at all the factors when estimating how much simulated performance will tend to overestimate real-world performance going forward.
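As a back-of-the-envelope illustration of how these factors compound, here is a sketch in which every number is a made-up assumption, not an estimate for any particular strategy:

```python
# Hypothetical inputs -- all assumptions for illustration only.
simulated_annual_return = 0.20   # what the backtest shows
alpha_decay = 0.03               # factor returns fading as information diffuses (items 1)
overfit_haircut = 0.04           # portion of backtest alpha that was curve-fit noise (item 2)
slippage_per_trade = 0.002       # 0.2% round-trip cost the simulation missed (items 3-4)
annual_turnover = 5.0            # portfolio turned over 5x per year

# Slippage drag scales with turnover: 0.2% x 5 turns = 1% per year.
slippage_drag = slippage_per_trade * annual_turnover

expected_real_return = (simulated_annual_return
                        - alpha_decay
                        - overfit_haircut
                        - slippage_drag)
print(f"{expected_real_return:.1%}")   # 12.0% -- well below the simulated 20%
```

The point is not the specific numbers but that the haircuts add up, and several of them (slippage especially) grow with turnover.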

And bias too (sometimes). Overfitting is variance, but bias can be just as important. Yet it is rarely discussed.

Five-stock models may not have a lot of bias, since such a model is much like a k-nearest-neighbors model in machine learning: k-nearest-neighbors makes no assumption about the distribution. But larger models assume that the ranking system is linear and that the factor ranks, as the predictor variables, function on an interval scale, which is usually not the case.

k-NN may not create a lot of bias, while larger models almost certainly will. But finding a good k-NN neighborhood (without a very powerful computer) is pure trial and error: hit-or-miss with no rhyme or reason. And it certainly can be overfit.
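To make the bias point concrete, here is a toy sketch with entirely made-up data (not any real factor payoff): when the true rank-to-payoff relationship is nonlinear, a straight-line fit is biased no matter how much data it sees, while a simple k-NN average tracks the curve.

```python
# Hypothetical data: factor rank in [0, 1] with a strongly nonlinear payoff.
xs = [i / 20 for i in range(21)]
ys = [x ** 3 for x in xs]        # assumed "true" payoff curve, noise-free

# (a) Ordinary least-squares line: biased here, because the truth is not linear.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def linear_pred(x):
    return intercept + slope * x

# (b) k-NN average: no linearity assumption, so far less bias on this curve.
def knn_pred(x, k=3):
    nearest = sorted(zip(xs, ys), key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

x0 = 0.95  # near the top of the rank distribution, where the curve is steepest
print(abs(linear_pred(x0) - x0 ** 3))  # large error: the line's bias
print(abs(knn_pred(x0) - x0 ** 3))     # small error: k-NN follows the curve
```

The flip side, as noted above, is that k-NN’s flexibility is exactly what makes it easy to overfit: with noisy data and a small k, it chases the noise.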