My first impression was disappointment that the thread seemed to be drifting off topic, but on reflection, this robustness question remains very relevant. My short answer is that I don’t have enough of a statistics background to comment on whether it’s true that a reduction in the number of factors increases robustness, but even if it is true, who cares? Robustness looks to me like it may be the right answer to the wrong question.
I can’t believe I’m going to do this, but I’m actually going to cite the Advisor Perspectives article I rebutted, specifically a part with which I actually agreed and for which I gave the author props:
“All of this rests on the assumption that relationships between factors and rates of return – if they exist – are persistent. That is, that they persist from one time period to another. This assumption is warranted in almost all scientific fields. If a result of a combination of physical forces is convincingly discovered in experiments, it will work in the future too. If a vaccine is developed that prevents a disease for test subjects, it will work even if everybody uses it.”
He goes a bit astray in the next paragraph:
“These assumptions are completely unwarranted in the investment field. If an investment strategy that beats the market is discovered and verified – even if it is not a spurious discovery – it will not work for everybody. It cannot; it is tautologically obvious that not everybody can beat the market. Hence, a strategy that is identified as effective, correctly or not, must, at some point, if it becomes popular and widely adopted, stop working and may even reverse to become a bad strategy.”
Yeah, a factor being arbitraged out is something about which we must worry, and it’s why we always need to stay on top of what we do (and a reason why I test with purposive sampling, rather than simply hitting the “Max” link). Frankly, though, anybody who loses sleep over the prospect of “everybody” using a strategy needs to get out of the house more often.
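To illustrate what I mean by purposive sampling, here’s a rough sketch. The strategy returns are made up (random numbers); the regime dates are the familiar S&P 500 turning points. The idea is just that you pick deliberately dissimilar windows rather than one “Max” run:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Synthetic daily strategy returns over the full history. The returns
# and the strategy itself are made up for illustration only.
dates = pd.bdate_range("1999-01-04", "2015-08-27")
strategy = pd.Series(rng.normal(0.0005, 0.01, len(dates)), index=dates)

# Purposive sampling: instead of one "Max" run over the whole history,
# test separately in deliberately chosen, dissimilar market regimes.
regimes = {
    "dot-com bust":     ("2000-03-10", "2002-10-09"),
    "mid-2000s bull":   ("2003-03-12", "2007-10-09"),
    "financial crisis": ("2007-10-10", "2009-03-09"),
    "post-2009 bull":   ("2009-03-10", "2015-08-27"),
}

for name, (start, end) in regimes.items():
    window = strategy.loc[start:end]
    print(f"{name:18s} annualized mean: {window.mean() * 252:+.1%}")
```

A model that only looks good over the whole span, but falls apart in one of these windows, is telling you something the “Max” run hides.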
What the author missed is that in scientific research, the findings are expected to be applied to the same population as that from which the test samples were drawn. So a vaccine successfully tested on research subjects can be expected to work for the population (subject to discovered and disclosed exceptions).
That’s not us. We can ONLY apply our findings OUTSIDE the population from which we test. The findings we make using the 1/2/99-8/27/15 population and/or any samples drawn therefrom cannot be applied to any part of that population. They can only be applied to the indeterminate population of 8/28/15 – whenever. So who cares how robust the test results are? If the 8/28/15-plus population differs in relevant respects, the model remains useless no matter how many mathematicians and statisticians would praise it.
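Here’s a toy illustration of the point, with entirely made-up numbers. A factor can be as robust as you like inside the test population; if the relationship shifts in the population you actually trade in, that robustness buys you nothing:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical "1/2/99-8/27/15" population: returns load strongly
# and positively on the factor.
factor = rng.normal(size=n)
in_sample_returns = 0.5 * factor + rng.normal(scale=0.5, size=n)

# Hypothetical "8/28/15-plus" population: the relationship has
# weakened and reversed. Nothing in the test data warned us.
factor_new = rng.normal(size=n)
out_sample_returns = -0.2 * factor_new + rng.normal(scale=0.5, size=n)

# A very robust in-sample result...
in_corr = np.corrcoef(factor, in_sample_returns)[0, 1]
# ...says nothing about the population we must actually apply it to.
out_corr = np.corrcoef(factor_new, out_sample_returns)[0, 1]

print(f"in-sample correlation:     {in_corr:+.2f}")
print(f"out-of-sample correlation: {out_corr:+.2f}")
```

No amount of statistical polish on the first number protects you from the second.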
To jump out of and beyond the research population and into the application population, we need reasons why we think that will be doable. We can never be sure (until it’s too late). But we have a big body of theoretical knowledge upon which we can rely to enhance our probabilities, and when we deal, as we must, with the 8/28/15-plus population, probabilities are all we can get.
In our context, minimizing the number of factors can be disastrous.
The single biggest problem is that it leaves us exposed to model mis-specification. This is a huge issue and may be the single biggest reason why a model might fail even if the future population resembles a historic population against which a robust test was done.
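The classic form of this is omitted-variable bias. A sketch with made-up factor names and numbers: if returns are really driven by two correlated factors and you model only one of them, the loading you estimate on the factor you kept is wrong, no matter how robustly you estimate it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Two correlated drivers of returns (names are hypothetical).
value = rng.normal(size=n)
momentum = 0.6 * value + rng.normal(scale=0.8, size=n)
returns = 1.0 * value - 1.0 * momentum + rng.normal(scale=0.5, size=n)

# Mis-specified one-factor model: regress returns on value alone.
X1 = np.column_stack([np.ones(n), value])
coef_misspec = np.linalg.lstsq(X1, returns, rcond=None)[0][1]

# Correctly specified two-factor model.
X2 = np.column_stack([np.ones(n), value, momentum])
coef_full = np.linalg.lstsq(X2, returns, rcond=None)[0][1]

# The one-factor estimate absorbs part of the omitted factor's effect
# and badly understates the true value loading of +1.0.
print(f"value loading, one-factor model: {coef_misspec:+.2f}")
print(f"value loading, two-factor model: {coef_full:+.2f}")
```

The stripped-down model isn’t a cleaner version of the truth; it’s a biased one.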
There’s also the nature of what we’re trying to do. Stocks do what they do for a multitude of reasons, and there are no brownie points awarded to those who resist the temptation to look at (uncorrelated but similarly potent) factors 2-5 and stick with robust factor 1 out of statistical principle. Sooner or later, you’ll hurt yourself if you reduce the number of ways you can succeed, and eventually that will make you (and your R2G subscribers) very unhappy.
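There’s even simple arithmetic on our side here. A toy sketch, assuming five hypothetical factors that are uncorrelated and similarly potent (same mean payoff, same volatility): the equal-weight combination earns the same average return with much less noise than robust factor 1 alone.

```python
import numpy as np

rng = np.random.default_rng(2)
periods, n_factors = 2000, 5

# Five hypothetical uncorrelated factors, each similarly potent.
factor_returns = rng.normal(loc=0.02, scale=0.10, size=(periods, n_factors))

single = factor_returns[:, 0]           # "robust factor 1" on its own
combined = factor_returns.mean(axis=1)  # equal weight across all five


def sharpe(r):
    """Per-period mean return divided by per-period volatility."""
    return r.mean() / r.std()


# With five uncorrelated streams, the combined ratio is roughly
# sqrt(5) times the single-factor ratio in expectation.
print(f"single-factor reward/risk: {sharpe(single):.2f}")
print(f"five-factor reward/risk:   {sharpe(combined):.2f}")
```

That’s the whole diversification argument in ten lines: more independent ways to succeed means less damage when any one of them takes a vacation.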