Factor reversal

I’ve seen phenomena similar to this Factor Reversal issue in other contexts as well.

Those who work with the capital asset pricing model are often confronted by the instability of beta and risk premium; the problem isn’t solved through arbitrage pricing theory since most other factors that can be used also tend to be unstable. I’ve also seen trouble in this regard with efficient-frontier portfolio theory; even if you can come up with decent expected-return inputs, there’s a ton of instability in standard deviation and in correlations.
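Here is a minimal sketch of the beta instability (Python/pandas; the data is randomly generated with a drifting “true” beta, purely a stand-in for real returns): estimate beta over rolling windows and watch how widely the estimates range.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Stand-in data: weekly market returns and a stock whose true beta
# drifts slowly over time (real data shows the same kind of drift).
n = 520
market = pd.Series(rng.normal(0.001, 0.02, n))
true_beta = np.linspace(0.8, 1.6, n)
stock = pd.Series(true_beta * market.values + rng.normal(0, 0.02, n))

def rolling_beta(stock, market, window=52):
    """OLS beta of stock on market over a rolling window."""
    return stock.rolling(window).cov(market) / market.rolling(window).var()

beta = rolling_beta(stock, market)
print(beta.describe())  # note how widely the estimates range
```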

There is a branch of research relating to a topic known as “regime switching” wherein they try to objectively model the usefulness of different factors at different points in time. I haven’t seen any great results yet from this, but it sounds like it might be promising if the researchers can successfully avoid the data mining trap (a problem that seemed rampant during the time I was exposed to heavy quants).

Don,

The reason that the top bucket had a value of 0.0 is that the distribution has one (or a few) outlier stocks with a price-to-sales value significantly lower than the pack. This distorts the distribution, leaving no stocks in the top bucket. If you re-run the performance of your price-to-sales ranking system using 200 buckets, you will see one bucket with a +8% annual gain at the 97.5 rank level. All the other rank values above a rank of 94 are 0.0. See below.

The problem arises because you didn’t use a liquidity filter on the stocks. If you use a liquidity filter even as low as AvgDailyTot(60) > 10000, the problem goes away. So the problem is caused by a few very small companies. This is one of the reasons I always try to use a liquidity filter. See below.
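To illustrate the idea outside P123, here is a minimal pandas sketch with made-up numbers (the column names are mine, not P123’s): filter on average daily dollar volume before ranking, which is roughly what AvgDailyTot(60) > 10000 accomplishes.

```python
import pandas as pd

# Hypothetical per-stock snapshot; in practice this comes from your data source.
df = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC", "DDD"],
    "ps_ratio": [0.90, 0.02, 1.40, 0.60],        # BBB: extreme micro-cap outlier
    "avg_dollar_vol_60d": [5e6, 3e3, 2e7, 8e5],  # 60-day avg daily dollar volume
})

# Liquidity filter, analogous to AvgDailyTot(60) > 10000:
liquid = df[df["avg_dollar_vol_60d"] > 10_000]

# Rank on price-to-sales within the liquid universe only (low P/S = high rank);
# the outlier no longer distorts the extreme buckets.
liquid = liquid.assign(ps_rank=(-liquid["ps_ratio"]).rank(pct=True))
print(liquid)
```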

Denny :sunglasses:


No ADT Filter.png


ADT greater than 10K.png

As Denny mentioned, this problem is generally due to outliers. Instead of using the factor on its own, I tend to use something like the following in my ranking systems:

eval(Pr2SalesQ > 0.2, Pr2SalesQ, NA)

A low P/S ratio is good, but an extremely low value usually indicates that there might be trouble.

  • Erik P

I too have noticed that much of what we rely on here at Portfolio123 falls apart after July 2007 and/or October 2007.

Last year there was talk of managed funds resorting to indiscriminate selling as the markets kept falling, so even the stocks of the most fundamentally strong companies fell as all the funds with similar factor strategies rushed for the exit. Maybe that is what we are now seeing in our data.

The question now is whether behavior will revert to “normal” (whatever “normal” means) in the near to medium-term future. Or have we entered into a completely new landscape that will require new techniques, strategies, tools, and ways of thinking? Where would that leave us with Portfolio123?

I think one has to be very careful about a lot of the implicit assumptions made when running backtests etc.

I have noticed, for example, that a lot of technical relationships are unstable and break down after a period of time, and/or even reverse (causing crippling losses for anyone relying on them for trading).

I have also noticed that some successful strategies can go through several months of underperformance and then pick up again. This raises an interesting question: if you have an underperforming portfolio, should you dump it or keep it? There are arguments to be made both ways.

This does sound very interesting. I know that the Standard & Poor’s backtester does include ranking systems (à la P123) that can have dynamic weighting, i.e. the weights change over time. Unfortunately that product runs about $30k/year, way out of my budget.

My feeling is that technical factors are the most vulnerable. I have seen the “mean reversion” trade shift in recent months from a shorter to a medium-term timescale. If you chart close(0)/sma(50), you can see it now shows a mean-reversion trade, but in prior years it really did very little.
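Here is a rough sketch of what I mean (Python/pandas on a randomly generated price series; the 0.9 trigger is arbitrary, just for illustration): compute close(0)/sma(50) and treat a stretched reading as a mean-reversion entry.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Stand-in daily closes (random walk); swap in real prices to test properly.
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

ratio = close / close.rolling(50).mean()   # close(0)/sma(50)

# Naive mean-reversion signal: buy when price is stretched below its SMA.
signal = ratio < 0.9
next_day_ret = close.pct_change().shift(-1)

print("avg next-day return when stretched:", next_day_ret[signal].mean())
print("avg next-day return overall:       ", next_day_ret.mean())
```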

Fundamental factors, such as valuation, are more interesting. I have talked numerous times about the effects of “valuation compression,” whereby the gap between high and low valuations has narrowed, something that cannot continue indefinitely and is therefore due for a correction. Perhaps it would be nice if P123 could display a chart showing the progress of the average P/E ratio, etc., for each of the five valuation quintiles in the market; then you could visually see compression/expansion in these ratios.
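Until P123 offers that chart, here is a sketch of how one might build it from downloaded data (Python/pandas; the column names date, ticker, and pe are my assumptions about how you’d lay out the export):

```python
import pandas as pd

# Assumes a long-format DataFrame `df` with columns: date, ticker, pe.

def quintile_median_pe(df):
    """Median P/E of each valuation quintile per date.
    Converging lines = compression, diverging lines = expansion."""
    df = df.copy()
    df["q"] = df.groupby("date")["pe"].transform(
        lambda s: pd.qcut(s, 5, labels=False, duplicates="drop"))
    return df.groupby(["date", "q"])["pe"].median().unstack("q")

# quintile_median_pe(df).plot()  # one line per quintile
```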

Another comment: Warren Buffett, the world’s richest man, has achieved a long-term rate of return of around 24%. In light of this, it is perhaps important to ask whether the 50%+ rates of return seen in backtests are realistic, or a mirage. Perhaps emphasis should be put on robustness rather than performance. I know it isn’t as sexy, but it may be more likely to work.

Hi Olikea,

Thanks for your comments. You are making valid points.

“Style drift” is a problem and it will never disappear; it is part of the markets. It is the notorious “pig cycle”: whenever something is working great, lots of people jump on the bandwagon. That is the nail in the coffin of that particular style (or indicator). After a while (short or long) the results start deteriorating or falling off a cliff, and people give up or modify their style. Sure enough, it will start working again at some point in time. Unfortunately, the timing of these cycles cannot be predicted. It is, and always will be, part of the markets.

I really support your view on “robustness.” It is easy to create “incredible” results with P123, only to see them fall apart when applied to the markets under real-world conditions.

Robustness is often talked about, but the lure and seduction of high returns often take over. Whenever I have the choice between GREAT but erratic returns and smaller returns with a ROBUST (smooth) equity curve, I always try to take the latter. In the end I was always glad I did.

Wern

Hi:
I totally agree that robustness is of No. 1 importance. I have had a few ports with great backtests but total failure in the real world. I would (as would a lot of other members) appreciate anyone sharing their robustness testing method.

Hongwei Hui

Hey Guys,

The above comments got me thinking. One of my favorite new features that I would like to see implemented is the ability to get the number of stocks that fulfill a certain condition. This could look like #stocks(Prc2SalesIncDebt<1,0), which would give the number of stocks fulfilling the condition at the open of bar 0.

I used the screener, downloaded the results into Excel, and plotted the number of stocks. What I found is that at the last rebalance of 7/14/08, there were 1152 stocks below 1. In the history of the database this is a new high. This means, in my view, that we are entering a strong buying point where value can be bought. It may be that the number of stocks continues to go up for a few more weeks or months, but we know that it will come down eventually. The lowest count was 696 and the average was 942.
What I take from this insight is that we need to watch the expansion/contraction of the number of stocks for each of our top 30 factors and then make a judgment call as to whether they are overbought or offer great value, as now.
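For anyone who wants to reproduce this without Excel, here is a pandas sketch of the counting step (the column names are hypothetical; the data would be downloaded screener results per rebalance date):

```python
import pandas as pd

# `df` is long-format screener data with columns:
# date, ticker, pr2sales_incl_debt  (hypothetical names)

def count_below(df, threshold=1.0):
    """Number of stocks with Prc2SalesIncDebt below `threshold` per date,
    i.e. a home-made version of the proposed #stocks(...) function."""
    below = df[df["pr2sales_incl_debt"] < threshold]
    return below.groupby("date")["ticker"].nunique()

# counts = count_below(df)
# counts.plot()   # compare highs/lows to the 696 / 942 / 1152 figures above
```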


nr stocks.bmp (1.65 MB)

Thanks for this; it’s a fascinating graph and exactly the sort of thing I was talking about. However, there are a couple of “gotchas” with this approach:

  1. The number of stocks in the universe tends to increase over time; therefore, the absolute number fulfilling a certain condition is also likely to increase, all other things being equal.

  2. Asking how many stocks fall below an absolute valuation ratio doesn’t answer the question of dispersion in valuations: it may be that the spread between high and low PSR has narrowed recently. I think most people would agree the market today offers considerably better value than on 03/31/01, and maybe that is the reason we see more stocks meet the screen. (Both gotchas are addressed in the sketch after this list.)
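Here is a sketch of how one could handle both gotchas from downloaded data (Python/pandas; the long-format layout and column names are assumptions):

```python
import pandas as pd

# `df`: long-format data with columns date, ticker, psr (hypothetical names)

def pct_below(df, threshold=1.0):
    """Gotcha 1: normalize the count by the size of the universe each date."""
    return df.groupby("date")["psr"].apply(lambda s: (s < threshold).mean())

def psr_dispersion(df):
    """Gotcha 2: track dispersion directly, here as the interquartile range."""
    return df.groupby("date")["psr"].apply(
        lambda s: s.quantile(0.75) - s.quantile(0.25))

# pct_below(df).plot()       # share of the universe below 1
# psr_dispersion(df).plot()  # a narrowing IQR = valuation compression
```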

I have been thinking about putting in a formal feature request to generate graphs, though I haven’t quite figured out how to word it, or exactly what to ask for. Definitely something along the lines of fwouters’ graph, perhaps showing the quintile medians of each factor or something like that. Discussion appreciated!

This is a fascinating topic. Robustness is something widely sought but, so far, rarely achieved. Case in point: the current financial mess, where the models that imploded were created by some incredibly sophisticated mathematical minds who can certainly preach robustness while chatting over coffee but couldn’t incorporate it into their work. Clearly, the conventional measures of statistical significance do not suffice; they, like the models from which they derive, are sample-specific.

The key to robustness may be a matter of understanding what exactly happens during cyclical turns, which is where much of the factor reversal and style drift tends to surface. And this raises the question of whether true robustness is really possible (without driving returns down to unacceptably low levels).

If factor reversal is to be conquered, I expect it will take some sort of “regime shifting” model wherein we use objective rules/models to forecast which factors are about to become more or less relevant. I haven’t had an opportunity to research this yet (I was preparing to tackle it about a year ago at Reuters but got reassigned to another project before I could get it off the ground), but my initial thought is that it would be based not just on market/financial data but also, and perhaps primarily, on economic data, and that we’d be looking for blips or outliers in rates of change (the second, or perhaps even third, derivative). We may also want to model rates of change in the forecast error of our basic models.
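To make the “blips in rates of change” idea concrete, here is a toy sketch (Python/pandas): difference an economic series twice and flag observations whose “acceleration” is an outlier versus its own trailing history. The series, window, and 2-sigma threshold are all placeholders, not researched values.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Stand-in for a monthly economic series (e.g. a production index).
econ = pd.Series(np.cumsum(rng.normal(0.2, 1.0, 240)))

d2 = econ.diff().diff()   # second difference = "acceleration"
z = (d2 - d2.rolling(36).mean()) / d2.rolling(36).std()

# Flag potential regime turns: acceleration more than 2 sigma away from
# its own trailing history. These dates would be the candidates for
# re-weighting factors in a regime-shifting model.
turns = z[z.abs() > 2]
print(turns.index.tolist())
```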

I don’t know if any of this makes sense; I’m just thinking out loud . . .

About Filip’s chart:

The number of stocks in the database increased from about 3500 to over 8000 on 01/26/02. You can see this in the vertical jump in Filip’s chart at that time. This is the point at which P123 added most of the micro caps and OTC stocks. Since the count didn’t double at that time, the % of these additional companies with Prc2SalesIncDebt < 1 probably differed significantly from the % among the larger companies prior to 01/26/02.

So the lowest number of stocks on Filip’s chart is not the lowest number as a % of the stocks in the database. That low occurred near the end of March 2006 on Filip’s chart, about a year before the S&P 500 peaked.

Another interesting point from Filip’s chart is that the triple high between August 2002 and April 2003 corresponds very closely to the triple low of the S&P 500.

Hmmm……

Denny :sunglasses:

Hi,
Very good discussion. When it comes to factors, what’s most important to me is what’s working now. Trying to pick a bottom in a factor trend is almost certain to be as difficult as trying to pick a market bottom. Not that there isn’t value in utilizing something that is at a historic low, but I find it simpler to just go with the flow of what’s working.

I have a few portfolios set up simply to monitor the performance of several ranking systems for comparison, so that I can see what is working now. Combining this idea with Filip’s, it wouldn’t be too hard to create portfolios to track single-factor ranking systems based on each of the top 30 or more factors. Then the factors could be easily compared in the portfolio view. As a silver member I do not have the extra rankings or portfolios available to do this. Any gold members want to try it? I wonder how the top 5-10 factors, ranked by 6-month performance 6 months ago, would have done this year?

As an example, following is a sim based on a P2SalesQ ranking, and another based on ProjPENextFY, something that seemed to be working well in 2007. Apparently that factor is not working so well now either.

Both sims used weekly rebalancing, rank < 101 as a sell rule, and buy rules: AvgDailyTot(50) > 100000, mktcap > 100 & Close(0) > 1.

Don



I had thought about using the % of stocks as well, but that requires just one additional iteration in Excel. I always use a minimum price > 1 and liquidity above 100000, so the total universe is about 4500 stocks instead of the 8000+ without the filter.
I decided to apply the same approach to another factor, Pr2SalesPTM < 1, which also shows big run-ups followed by deep declines. I have now added the equity curve as per the screener to better identify the linkages. The graph is basically available in the screener, but because it is formatted at a small size it is difficult to really “read” it in detail.
This factor tells a different story. It shows that despite the declines, the % of stocks below 1 is still very low. The lowest point was reached on 7/31/06 at 19.1%; at 7/14/08 the % of stocks stands at 19.5%. This probably means that trailing sales have been declining faster than price, and therefore the value is not there. I read the same argument today for the S&P 500: the P/E for the index has actually risen while the index declined, because earnings have declined faster than price.
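To put illustrative numbers on that argument (mine, just for clarity): if price falls 20% while trailing earnings fall 30%, the multiple goes from P/E to (0.80 × P) / (0.70 × E) ≈ 1.14 × P/E, so the market gets more “expensive” even as it declines. The same arithmetic applies to price-to-sales.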
I must admit that since we are looking at relative value within a universe, I did not follow the absolute levels. I wish Portfolio123 would offer more tools that provide readings at an aggregated level, like the absolute evolution of a factor’s value for sector X, industry Y, or universe Z. That way we could better evaluate the absolute “value” on offer in addition to the relative value.


Filip,

I don’t understand why you used Pr2SalesPTM instead of Pr2SalesTTM in your latest chart. If I understand the chart correctly, the value for the number of stocks with Pr2SalesPTM < 1 plotted at the right end of your chart is the value of Pr2Sales from one year ago. If you had used Pr2SalesTTM < 1, the value at the end of the chart would be the current value. The way I read the chart, it shows the effect the equity curve has on Pr2Sales one year later. This seems backwards to me. Am I missing something?

Denny :sunglasses:

Hi Denny,

You are right that TTM would be a better choice, but I had found that the PTM version is a stronger single factor, with a nice upward slope in the ranking performance graph. I can’t really understand why that is either, as you would think the TTM version would be better. Maybe for this exercise it would indeed be better to focus on current values, since we are trying to evaluate the current state of valuation.

The best way to dynamically and mechanically discover factors that generate positive alpha is through regression analysis. It is time to make this functionality available to users here.

Great thread. I know of a specific quant long-only manager that explicitly uses factor momentum. They forecast factor returns based on the R-squared of the factor (relative to the expected return of a stock) and the acceleration of its explanatory power (momentum of the factor return). There was a thread a while back where a user requested regression tools. The request, as I recall, was generally dismissed on the grounds that it would lead to over-optimization and the data mining problem, which is a real concern. However, there is a clear application to factor momentum that should be quite easy to implement (we don’t have to turn P123 into a statistics package): basic regression (multiple regression as well), R-squared, and t-stats. One could easily envision developing a factor momentum strategy with these tools (see the sketch below). If it is good enough for a large, multi-billion-dollar institutional money manager, it is good enough for us. I am going to request basic stat tools as a feature request. They can be powerful, used in the right way.
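To illustrate the kind of tool being requested, here is a sketch (Python/numpy, on synthetic data; this is my illustration, not the manager’s actual model): regress cross-sectional stock returns on a factor exposure for one period, producing the factor return, R-squared, and a t-stat for the slope.

```python
import numpy as np

rng = np.random.default_rng(3)

def cross_sectional_ols(returns, exposure):
    """One period's OLS of stock returns on a factor exposure.
    Returns (factor_return, r_squared, t_stat_of_slope)."""
    X = np.column_stack([np.ones_like(exposure), exposure])
    coef = np.linalg.lstsq(X, returns, rcond=None)[0]
    resid = returns - X @ coef
    n, k = X.shape
    sigma2 = resid @ resid / (n - k)            # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)       # coefficient covariance
    tss = ((returns - returns.mean()) ** 2).sum()
    r2 = 1.0 - (resid @ resid) / tss
    return coef[1], r2, coef[1] / np.sqrt(cov[1, 1])

# Synthetic single period: 500 stocks where the factor "works".
exposure = rng.normal(size=500)
returns = 0.02 * exposure + rng.normal(0.0, 0.05, 500)
fret, r2, t = cross_sectional_ols(returns, exposure)
print(f"factor return {fret:.4f}, R^2 {r2:.3f}, t-stat {t:.1f}")
# Repeat per rebalance date to get time series of factor return and R^2;
# the trend in those series is the "factor momentum" described above.
```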

Ted

Hi Don,

Price to Book and Price to Sales (combined with financial strength criteria) have been working well in retail. The combination hit a tough stretch in 2007 but has since rebounded nicely.

http://www.portfolio123.com/blog.jsp?postid=45&topic=idea

Wayne

I would guess it’s no coincidence that this lines up with the top of the market. If you took all the stocks chosen by this screen in the period when it was negatively correlated and adjusted each for volatility, I wouldn’t expect to see any indication of added alpha (after accounting for bankruptcy risk, etc.).

I’m guessing this is the case because the screen picks up largely oversold, low-priced stocks. Low-priced stocks are more volatile than average, so given the trend of the market, I would suspect “factor reversal.”

You could possibly figure some of it out by looking at the distributions of price and sales… or just assume price is lognormal and then deal with sales. Then normalize both (I don’t know if this is outside the abilities of this service). This could eliminate any shifts in annual returns that are just a function of the distributions.
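Here is a sketch of that normalization done outside the service (Python/numpy, with randomly generated stand-ins for price and per-share sales): work in logs, per the lognormal assumption, and z-score both pieces.

```python
import numpy as np

rng = np.random.default_rng(4)
price = np.exp(rng.normal(3.0, 1.0, 1000))     # stand-in: lognormal prices
sales_ps = np.exp(rng.normal(2.0, 1.2, 1000))  # stand-in: sales per share

def zscore(x):
    return (x - x.mean()) / x.std()

# Work in logs so a few extreme names don't dominate, then normalize.
norm_price = zscore(np.log(price))
norm_sales = zscore(np.log(sales_ps))
norm_ps = norm_price - norm_sales   # normalized log(P/S)

# A screen on norm_ps stays comparable across years even if the raw
# distributions shift, which is the point being made above.
print(norm_ps.mean(), norm_ps.std())
```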

Hi All,

Markets are embedded in the emergent socio-political-economic fabric. This is the “hidden variable.” It’s no wonder that factors reverse and styles drift. That’s built in.

[Math talk: The data is non-stationary.]

As a trader, the question is what to do about it. One possible solution: (1) keep a good variety of simple, non-curve-fitted systems that backtest well; (2) determine overall market direction [I use SMA X-overs and patterns; see the sketch below]; (3) after a major turning point, whichever system has been outperforming since that turning point will continue to outperform.

[Math talk: Regime changes are marked by major turning points.]
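Here is a bare-bones sketch of step (2) (Python/pandas on a randomly generated index series; the 50/200-day windows are common defaults used for illustration, not my actual parameters):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
# Randomly generated stand-in for a market index price series.
index = pd.Series(1000 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 2000))))

fast = index.rolling(50).mean()
slow = index.rolling(200).mean()

# Regime = overall market direction from the SMA crossover.
regime = pd.Series(np.where(fast > slow, "up", "down"))

# A major turning point is wherever the regime flips; per step (3),
# favor whichever system has outperformed since the most recent flip.
flips = regime.ne(regime.shift())
print("number of regime turns:", int(flips.sum()) - 1)
```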

On a personal note: I deeply appreciate the well-thought-out contributions you all are making, especially Denny & Olikea.

    "Live long and prosper"  -- Bill