Rank vs RankPos: Which is better?

TL;DR

RankPos is better than Rank in sell rules because it keeps a consistent sell point as the underlying universe changes size over time. Do you agree or disagree?


I wanted to get some input on the use of Rank vs. RankPos in sell rules.

Until recently, I used the Rank function to decide when to sell a position. A typical sell rule was something like “Rank<95.5”. I would arrive at the 95.5 number by running historical tests and optimizing for the best sell point.

However, after some research I realized this approach may be fundamentally flawed, because the size of the underlying universe can change dramatically over time. When you use Rank in a sell rule, the same threshold corresponds to a very different sell sensitivity depending on the era.

For example, let’s say I use a simple custom mid/large cap universe with these rules:

MktCap > 2000
AvgDailyTot(200)>15000000
AvgDailyTot(20)>15000000
Universe(MasterLP)!=TRUE
Price>5

The size of that custom universe has changed dramatically over the historical testing period:

1/4/1999: 460 companies
1/1/2010: 911 companies
1/1/2018: 1365 companies

So by using my “Rank<95.5” rule for the historical testing period, I’m actually selling stocks much more quickly in the early days than I would now. For example, if I have 10 positions in my portfolio, in 1999 the 95.5 rank corresponded to the 20th position in the universe. In 2018, it corresponds to the 62nd position in the universe.
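
For anyone who wants to check that arithmetic, here’s a back-of-the-envelope Python sketch (not a screen or sim rule, just scratch math; the exact positions depend on how Rank and RankPos handle rounding and ties, so treat the numbers as approximate):

# Convert a Rank threshold (a percentile from 0-100) into the roughly
# equivalent position in the universe. Rank<95.5 means "fell below the
# top 4.5%", so the cutoff position is about 4.5% of the universe size.
def rank_to_position(rank_threshold, universe_size):
    return round(universe_size * (100 - rank_threshold) / 100)

for year, size in [(1999, 460), (2010, 911), (2018, 1365)]:
    cutoff = rank_to_position(95.5, size)
    print(f"{year}: Rank<95.5 sells anything outside roughly the top {cutoff} of {size} stocks")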

So by using Rank, in the early era I was buying the 10 best stocks and selling them anytime they fell out of the top 20 spots. Now I’m buying the 10 best stocks and selling them anytime they fall out of the top 62 spots. Clearly this is a different strategy that will produce different results, right?

This would also explain why in backtests the turnover is much higher in early years than in modern years. The 95.5 sell point was being hit much more quickly than it is today.

Again, this is because my custom universe has gradually gotten larger. And if you use a dynamic universe that changes size on a monthly or annual basis, this problem could be even worse.

So it seems like the best solution is to use RankPos instead, so that you’re always employing the same sell point regardless of universe size. For example, “RankPos>30” says I want to buy the 10 best stocks and sell them anytime they fall out of the top 30 spots. This would not be affected by changes in universe size.

Thoughts? Do you agree? Or is Rank better for reasons I’m missing?

Thanks for your feedback!

I don’t know. I guess it depends on usage, but I’ll say when I backtest (not sim) I like to use Rank. Rank>95 or Rank>90 is a pretty common backtest rule I use, especially when the number of stocks is changing over time. I view it as “give me a backtest of the top 5% or 10% of the universe based on the ranking system.” In the case of your example, at Rank>90, that means my backtests would select 46 companies initially, then by 2010 would be selecting 91 companies, and by 2018 would be selecting 136 companies. I’m ok with that if I’m testing a ranking system.

The alternative would be selecting, say, 40 stocks to backtest with; in 1999 that would be equivalent to a rule of Rank>91.3, while in 2018 those same 40 stocks would be equivalent to Rank>97. I’d expect the later results at Rank>97 to perform better than a backtest set at Rank>91.3, so I’d be concerned that my backtest results are being skewed. (In other words, I’d expect the backtest to perform better over time with a fixed number of stocks than with a percentile cutoff.)
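
Here’s the same scratch math in Python going the other direction, i.e. what Rank threshold a fixed 40-stock backtest corresponds to as the universe grows (approximate, and obviously not an actual backtest rule):

# What Rank threshold does a fixed number of stocks correspond to
# as the universe grows? Rank threshold = 100 * (1 - N / universe size).
def equivalent_rank(n_stocks, universe_size):
    return 100 * (1 - n_stocks / universe_size)

for year, size in [(1999, 460), (2018, 1365)]:
    print(f"{year}: holding the top 40 stocks is roughly Rank > {equivalent_rank(40, size):.1f}")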

But that’s for backtesting, and I’m sure there are uses for both; it’s just a consideration. Big changes in universe size are going to cause issues of this kind either way. If the universe size isn’t changing, it isn’t really going to matter.

One idea: to make your constraints adaptive so your universe scales over time, you can use a ratio tied to the overall market level. I use something like this so that my constraints are lower when market values are lower and higher when market values are higher - sort of like a GDP deflator.

I have a custom formula I use:
SPRatio2016: Close(0,$SP500)/2238.83

If I recall, this value is 1.0 at some point in 2016 (probably the start of the year; edit: I checked and the calc is actually 1 at the end of 2016). As the value of the SP500 gets smaller going back in time, this ratio gets smaller, so I can multiply my volume or mktcap constraint by it, and my historical constraints shrink or grow as the market gets lower or higher. It won’t eliminate the issue with your universe size changing so much, but it can help some. There might be some issues or concerns with doing something like this (historical liquidity and average mktcaps might not scale the same way as this ratio, so better ratios might be appropriate), but I do use it as a “helper” to counteract the inflationary tendency in the data.
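
To make the scaling concrete, here’s a rough Python illustration (the S&P 500 levels below are approximate year-end closes I’m plugging in purely for illustration; the 2238.83 anchor is the end-of-2016 close from the formula above):

SP500_2016_CLOSE = 2238.83

def scaled_floor(base_floor, sp500_level):
    # the SPRatio2016 idea: scale the floor by the market level vs. end of 2016
    return base_floor * sp500_level / SP500_2016_CLOSE

# approximate year-end S&P 500 closes, for illustration only
for year, level in [(1999, 1469), (2010, 1258), (2018, 2507)]:
    print(f"{year}: a MktCap > 2000 floor scales to roughly MktCap > {scaled_floor(2000, level):.0f}")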

I think it depends on multiple factors, including how quickly you rebalance. RankPos with a weekly rebalance will kill your alpha b/c of the slippage. If your system is THAT GOOD that something at rank 98 always outperforms something at rank 96, then by all means go with RankPos. I just know my systems generally aren’t that precise, and by using something like Rank 80/90 all I’m doing is buying quality top companies and selling them when they become less good.

Even with your small universe of 460, with 50 positions you’re still only buying roughly the top 10% of companies.

Very interesting. If I’m understanding, you guys are talking about using Rank>X as a buy rule to carve out the top slice of the universe (the top 5% or 10%) in backtests. Is that right?

I was more thinking of Rank<X as a sell rule.

What about your sell rules? Do you use Rank or RankPos?

Instead of Rank, I exclusively use RankPos in the sell rules and see no problems doing so. For me, it seems natural to use a count-like ranking threshold when the number of positions is fixed; a 20-position portfolio will have a RankPos sell threshold somewhere above 20. I’m not sure what the issue is with slippage and weekly rebalance + RankPos. My models tend to have transactions with a high average return and long holding periods.

Walter

Walter - exactly!

If you’re buying a fixed number of positions in your portfolio, it only makes sense to sell them at a fixed sell point (RankPos).

The alternative would be “buy the top 1% of the universe and sell positions when they fall out of the top 5%.” In that case, you would use Rank. But that wouldn’t be a fixed-size portfolio. The portfolio size would change based on the size of the universe (1% * universe size).

Thanks for your input.

Todd,

I use RankPos as well.

But if I see that my universe size is changing dramatically over time, I’d do something about that first. A universe of 3000 stocks is going to work differently than 1000 stocks even with RankPos.

For example, if liquidity is the constraint, I’d put FRank(“MktCap”) > 40 or something like that into the universe rule. This way the cutoff varies over time with the overall level of the market, which in turn is loosely tied to inflation.

Imagine if you were able to backtest on data going back to the 1920s and you put in MktCap > 50. Possibly only the very largest mega-cap stocks would make the cut back then.
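
To illustrate the difference between an absolute floor and a relative cutoff, here’s a toy Python sketch with made-up lognormal market caps. The numbers mean nothing in themselves; the point is just that an absolute floor admits a different fraction of the universe as overall market caps rise, while a percentile cutoff like FRank("MktCap") > 40 keeps a fixed fraction by construction:

import random

random.seed(0)

# toy market caps: the "later era" is the same names with caps 3x higher
early_era = [random.lognormvariate(7.0, 1.5) for _ in range(1000)]
later_era = [cap * 3 for cap in early_era]

def pct_passing_floor(mktcaps, floor):
    return 100 * sum(cap > floor for cap in mktcaps) / len(mktcaps)

print(f"absolute floor of 2000, early era: {pct_passing_floor(early_era, 2000):.0f}% of the universe passes")
print(f"absolute floor of 2000, later era: {pct_passing_floor(later_era, 2000):.0f}% of the universe passes")
print("percentile cutoff (top 60% by mktcap): 60% passes in both eras by construction")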

Because basic backtests using the screener, the rolling backtest, and the portfolio simulations are all based on a set number of stocks held, it makes more sense to use RankPos. If you were going to hold the top 1% of stocks in your universe, for instance, you’d have to write some rather complicated rules for your simulation, and you would rarely have 100% of your capital invested. And I would think you’d want your sell rules to match your buy rules. If you’re holding the top 20 ranked stocks, using Rank instead of RankPos will indeed produce inconsistent results as your universe size changes, with turnover that varies with universe size.
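
One crude way to see that turnover point, ignoring everything about how ranks actually move: count roughly how many other stocks can climb past a holding before each sell rule triggers. A rough Python sketch, assuming 10 positions bought from the top of the ranking and the universe sizes from earlier in the thread:

def stocks_that_must_pass(universe_size, n_held=10):
    # Rank<95.5 sell rule: a holding is sold once it falls below roughly
    # the top 4.5% of the universe
    rank_buffer = round(universe_size * 4.5 / 100) - n_held
    # RankPos>30 sell rule: a holding is sold once it falls below position 30
    rankpos_buffer = 30 - n_held
    return rank_buffer, rankpos_buffer

for year, size in [(1999, 460), (2010, 911), (2018, 1365)]:
    rank_buf, rankpos_buf = stocks_that_must_pass(size)
    print(f"{year}: Rank<95.5 allows ~{rank_buf} stocks to pass a holding before it is sold; "
          f"RankPos>30 always allows about {rankpos_buf}")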

It is, however, possible to build a simulation using only rank, without a set number of stocks, and then adjust the results to reflect the leverage ratio. And using the screener it’s easy to do with a rule like rank > 98 and setting a significant rank tolerance. But I would argue against mixing the two measures.

But I should add that the best solution is to design a universe that doesn’t change drastically in size over the years. That will give you much better backtests.

Chaim and Yuval - Great points on the changing universe size. Sounds like RankPos is the way to go, but I also need to revisit my custom universe so it doesn’t change so much over time.

Thanks for your input, I really appreciate it!