WHICH ANOMALIES ARE LEGIT?

OPU = Opportunities for Portfolio123 Users

Yes, Yes, Yes. 100% Agree with Marc!

To provide an actual response: I believe there are three ways to determine whether an anomaly is legit or not. Two are data-driven approaches, the frequentist and Bayesian views; the third is philosophy.

The data-driven approaches are what we most often read about in academia. Frequentists will cite p-values and significance over arbitrarily long time frames. Bayesians will say, "yeah, but…" market factors are dynamic, so we should weight what has worked recently. Both views are equally hazardous. In trying to balance statistical significance against pragmatism, it is often spuriousness that wins.
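To make the frequentist hazard concrete, here is a minimal simulation (all numbers hypothetical) of the multiple-testing problem: regress enough random "factors" on random returns and a predictable fraction will look significant, despite there being nothing to find.

```python
# A minimal sketch of the multiple-testing hazard: test enough random
# "factors" against random returns and some will look "significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_months, n_factors = 240, 500  # 20 years of months, 500 candidate factors

returns = rng.normal(0.0, 0.04, n_months)               # pure-noise "returns"
factors = rng.normal(0.0, 1.0, (n_factors, n_months))   # pure-noise "signals"

false_positives = 0
for f in factors:
    slope, intercept, r, p, se = stats.linregress(f, returns)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_factors} random factors are 'significant' at p < 0.05")
# Expect roughly 5% (~25) to pass, despite zero true relationships.
```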

The other approach is to have a bedrock of principle regarding the drivers of market value. I believe that many great practitioners have this sort of compass by which to evaluate whether a thing points true or not. Buffett talks about this in "The Superinvestors of Graham-and-Doddsville". Philosophy does not eschew data. Rather, it embraces data as a means of verifying, refuting, and/or calibrating beliefs. But philosophy trumps data because: a) in an infinite universe, there will be an infinite number of spurious correlations; and b) correlation does not imply causation. This approach is related to the method of scientific inquiry whereby no amount of data can prove a belief, yet a single data point can refute a false one.

I think that Fama and French (2014) made one of the most compelling arguments for justifying their fundamental factors. Borrowing from Miller and Modigliani's (1961) valuation theory, FF apply the time-value-of-money principle to choose their factors. FF stopped at five factors, I believe, because they were trying to prove that the CAPM cannot possibly capture the essence of efficient pricing (i.e., the CAPM is actually the "CRAP-M"). As academics, their goals are NOT necessarily aligned with practitioners'.
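For reference, the valuation identity FF start from (my transcription of the Miller-Modigliani logic, with Y for earnings, dB for the change in book equity, and r for the implied expected return):

```latex
% Market value = discounted expected earnings net of changes in book equity
M_t = \sum_{\tau=1}^{\infty} \frac{E(Y_{t+\tau} - \mathrm{d}B_{t+\tau})}{(1+r)^{\tau}}
```

Divide both sides by book equity B_t and hold the other terms fixed: a higher book-to-market ratio implies a higher expected return (value), higher expected earnings imply a higher expected return (profitability), and higher expected growth in book equity implies a lower expected return (investment). The market and size factors round out the five.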

Furthermore, I believe this framework anticipates and describes how many known asset-pricing anomalies articulate within a broader structure. Those that do articulate are likely to be robust. In the absence of "statistical significance", an abridged methodology for evaluating "legit anomalies" may proceed as follows:

  1. How does any given anomaly articulate within the concept of fair present value? I.e., how can it be used to estimate the present value of future cash flows, or at least to gauge the market's expectations? (A toy sketch of this step follows the list.)

  2. How likely is a given factor to uncover errors in expectations? Even if a factor does not directly articulate with a mathematically convenient expression of present value, some anomalies may carry information about the persistence of estimation errors and/or the presence of information that has not yet been dispersed into prices.

  3. How likely is an apparent bargain to turn into a value trap? I.e., what is the likelihood that prices discount better information than I currently possess? Many times, apparently cheap things deserve to be cheap. That said, low expectations are easier to surpass than high ones.
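On step 1, here is a minimal sketch of what "gauging the market's expectations" can look like in practice: given a price and a set of cash-flow forecasts, solve for the discount rate that reconciles the two. Every number below is made up purely for illustration.

```python
# A toy sketch of step 1: back out the discount rate the market is using.
# All inputs are hypothetical; a real model needs actual forecasts.
from scipy.optimize import brentq

price = 50.0                             # current market price
cash_flows = [3.0, 3.3, 3.6, 4.0, 4.4]   # forecast cash flows, years 1-5
terminal_growth = 0.02                   # perpetual growth after year 5

def present_value(r):
    pv = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))
    terminal = cash_flows[-1] * (1 + terminal_growth) / (r - terminal_growth)
    return pv + terminal / (1 + r) ** len(cash_flows)

# The implied expected return is the r that makes PV equal the price.
implied_r = brentq(lambda r: present_value(r) - price, 0.03, 0.50)
print(f"Market-implied expected return: {implied_r:.2%}")
```

If the implied return looks implausibly high relative to peers, either the market expects those forecasts to be wrong (step 2) or the stock is a candidate value trap (step 3).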

Admittedly, this framework utterly fails to anticipate the presence of momentum. There may be other shortcomings as well. However, my goal is NOT to explain the universe, but rather to filter out likely distractions and avoid likely pitfalls.