How to find cross-sectional Rank or RankPos

Hi, People.

I’m trying to determine the rank (or rank position) of the lowest-ranked stock among my portfolio’s holdings.

  • I would want to do this in my buy and/or sell rules.
  • I would settle for median rank, or something like that.

Something like a #Portfolio scope would work, but this is not supported (???).

Any help is appreciated.

Best,
//dpa

Open a screen, set your universe and ranking system to the ones you use, then type in the rule Portfolio([portfolioID]). Run the screen. The lowest-ranked stock will be the bottom one, and it’ll give you the rank. If you want the rank position, you can just add a rule with RankPos and run the screen again, using the “Screen Factors” option as your report.

Thanks, Yuval.

What if I want to pass the lowest/highest/median rank/rankpos into a Portfolio’s buy and/or sell rules? Can I do that?

No, I don’t think so. But what exactly are you trying to do? There may be another way to do what you want to do.

I am trying to rank expected returns of stocks in a universe. This is a challenge because I use expected position sizes to determine expected returns (i.e., expected returns increase as transaction costs decrease; expected returns decrease as expected market impact increases).

The ways I thought to do this were:
a. Compare a stock’s expected return against the position with the lowest expected return.
b. Pass parameters of the Portfolio (e.g., account balance, number of positions, etc.) to a Ranking System.

My end goal is to pick the stocks with the highest Kelly criterion values for an equal-weighted portfolio. However, unlike the basic academic examples that are provided in the literature, I am introducing transaction costs and non-constant rates of return. I am a sucker for mental torture.

You and I think alike. I’ve been working on this for a while. This is what I do.

First, give every stock a number of points based solely on rank position. Let’s call it P. Rankpos = 1 means P = 100, rankpos = 50 means P = 0. I use a logarithmic formula to assign P but there are other ways to do it.
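Yuval doesn’t give his exact formula, but as an illustration, one logarithmic mapping that hits both endpoints (rankpos 1 → 100 points, rankpos 50 → 0 points) would be:

```python
import math

def points(rankpos, worst_rank=50):
    """Map a rank position to points P.

    One illustrative logarithmic mapping, not Yuval's actual formula:
    rankpos = 1 gives P = 100, rankpos = worst_rank gives P = 0,
    and P falls off steeply near the top of the ranking.
    """
    return 100.0 * (1.0 - math.log(rankpos) / math.log(worst_rank))
```

Because of the log, the drop from rankpos 1 to 2 costs more points than the drop from 40 to 41, which matches the usual observation that excess returns are concentrated in the very top ranks.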

The next step is to come up with a constant for the expected return of your TOP-RANKED stock BEFORE transaction costs. I use 4.56%. Let’s call that R.

The formula for the expected return would be RP/100 - S/2 - 0.02*(I/(N*D))^0.5 where R and P are defined as above, S = bid-ask spread as a percentage of price, I = total amount invested, N = number of stocks held, and D = median daily dollar volume.

So, for example, if a stock got 67 points, had a median daily dollar volume of $750,000, a bid-ask spread of 0.1% of price, with a $1 million portfolio invested in 30 stocks, the expected return would be 0.0456*67/100 - 0.0005 - 0.02*(1000000/(30*750000))^0.5 ≈ 2.58%.
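Putting the formula into code (using the symbols defined above; the numbers in the call are the worked example from this post):

```python
def expected_return(P, S, I, N, D, R=0.0456):
    """Expected return per the formula R*P/100 - S/2 - 0.02*sqrt(I/(N*D)).

    R: expected return of the top-ranked stock before transaction costs
    P: points assigned from rank position (0-100)
    S: bid-ask spread as a fraction of price
    I: total amount invested
    N: number of stocks held
    D: median daily dollar volume
    """
    return R * P / 100 - S / 2 - 0.02 * (I / (N * D)) ** 0.5

# 67 points, 0.1% spread, $1M portfolio, 30 stocks, $750K median volume:
er = expected_return(67, 0.001, 1_000_000, 30, 750_000)  # ≈ 0.0258 (2.58%)
```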

You can use custom formulae to simulate buying stocks based on expected return using this formula. Boil everything down to one final formula for your desired weight. I’m attaching a couple of screenshots to show you the settings I use on my simulation, but these are my personal preferences. The idea behind the custom formula would be that the buy amount gets lower as the expected return gets lower.


Screenshot_2020-03-26 Simulated Strategy Trading System - Portfolio123.png


Screenshot_2020-03-26 Simulated Strategy Trading System - Portfolio123(1).png

Thanks, Yuval!

I am doing something similar, but not identical.

I like that you’re letting the expected returns dictate your holding size.

I am trying to implement buy and sell rules based on the expected return, taking into account costs and all that.

Again, similar, but not the same.

It’d be awesome to have a system setting which could buy the top ‘n’ stocks or top ‘x’ percent of the highest-weighted stocks (that also pass the screen). This is a deviation from the current inner workings, which buy the top ‘n’ ranked stocks.

One of the nice things about formula weight rebalancing is that it can take the place of strict buy/sell rules. It adds a lot of flexibility. If you can come up with a formula that weights certain positions at zero or below, you can actually leave the buy and sell rules blank and have a great deal of flexibility about the number of positions you hold.

So, for example, you might have a formula that calculates expected returns, and a stock ranked 9 will have an expected return of -0.1% because of its low liquidity while a stock ranked 12 will have a positive expected return. You plug that into your position weight formula and presto.

I think you might be able, with custom formulae, to design a “setting which could buy the ‘n’ or ‘x’ percent of the highest weighted stocks (that also pass the screen).”
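The idea of letting zero/negative weights stand in for sell rules can be sketched like this (the tickers and expected returns are hypothetical, and the proportional normalization is an assumption for illustration, not P123’s exact rebalancing behavior):

```python
def formula_weights(expected_returns):
    """Weight positions in proportion to expected return, dropping any
    position whose expected return is <= 0 (so it is effectively 'sold'
    without an explicit sell rule)."""
    kept = {t: r for t, r in expected_returns.items() if r > 0}
    total = sum(kept.values())
    return {t: r / total for t, r in kept.items()}

# A higher-ranked but illiquid stock ("B") has a negative expected return
# and drops out, while a lower-ranked liquid stock ("C") stays in.
w = formula_weights({"A": 0.031, "B": -0.001, "C": 0.018})
```

Note that the number of positions held then falls out of the formula rather than being fixed in advance, which is exactly the flexibility described above.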

This is great. How do I go about finding one of my portfolio IDs?

Your port id is in the URL when you’re looking at the summary page (with the returns chart). For example

https://www.portfolio123.com/port_summary.jsp?portid=1004872

That’s the Portfolio123 GARP $100K public portfolio. Its portid is 1004872.

Okay. Thank you for your thoughts, Yuval. I’m pretty close to a linear comparison of expected returns. What’s now blocking me is that I cannot access factors on Portfolio holdings that are normally available through a Portfolio’s sell rules: e.g., NoDays, GainPct, etc.

These are important b/c I want to avoid selling things based on capital gains (as well as avoid buying things that are subject to wash rules). Not to say I wouldn’t do these things, just they should be part of the calculus.

Is there any way you could expose these factors to the buy rules? If a stock is not already held, the factor should return -1 or NA, just like the Portfolio() factors currently do.

Thank you!

David,

Two things:

  1. How can you be doing Kelly Criterion if you do not look at correlations?

  2. Do you have to use P123 for everything? You are pretty adept at Python, I think.

If you are willing to use Python, let me start from scratch with what I do.

So with the Kelly criterion, the weight of a position for optimal Kelly is μ/σ^2 if the risk-free rate is 0.

The expected long-term compounded growth rate is g = ((μ/σ)^2)/2, again ignoring the risk-free rate.

So this simplifies to maximizing μ/σ. Maximizing μ/σ will maximize your growth (if μ/σ is positive).
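In code, the two formulas above (with risk-free rate 0) are:

```python
def kelly_fraction(mu, sigma):
    """Optimal Kelly leverage for a single asset: f* = mu / sigma^2."""
    return mu / sigma**2

def kelly_growth(mu, sigma):
    """Long-run compounded growth rate at full Kelly: g = (mu/sigma)^2 / 2."""
    return (mu / sigma) ** 2 / 2

# e.g. mu = 10% expected excess return, sigma = 20% volatility:
# f* = 0.10 / 0.04 = 2.5x leverage, and g = (0.5)^2 / 2 = 0.125
```

Since g depends only on the ratio μ/σ, maximizing the Sharpe-like ratio μ/σ maximizes growth, which is why the optimizer below targets max Sharpe.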

Fortunately the weights that will optimize μ/σ can be found using this code in Python:

# First, in a shell: pip install PyPortfolioOpt
import pandas as pd
from pypfopt.expected_returns import mean_historical_return
from pypfopt.risk_models import CovarianceShrinkage
from pypfopt.efficient_frontier import EfficientFrontier

df = pd.read_csv('/Users/JamesRinne/opt/ReadFile/poptindex.csv')
f1 = df[1257:]  # keep only the more recent rows of price history
f1_slice = f1[['QQQ', 'XLE', 'XLU', 'XLK', 'XLB', 'XLP', 'XLY',
               'XLI', 'XLV', 'XLF', 'TLT', 'AGG']]

# Set your own expected return for each column (placeholders here),
# or estimate from history with mean_historical_return(f1_slice)
mu = [m1, m2, ...]  # one expected return per column
S = CovarianceShrinkage(f1_slice).ledoit_wolf()  # shrunk covariance matrix

ef = EfficientFrontier(mu, S)
weights = ef.max_sharpe()
cleaned_weights = ef.clean_weights()
print(cleaned_weights)
ef.portfolio_performance(verbose=True)

where QQQ, …, AGG are the column headings for the ETFs’ price history in this example. But you can use the price history of any equity, port, or ETF, obviously.

You can set your own expected returns for each equity or ETF with mu = [your expected returns for each equity]

Furthermore the program does allow you to use leverage and short positions so you should be able to find the true optimal Kelly within your leverage and short constraints.

You add (or change) this line of code to allow 2X leverage and short positions: ef = EfficientFrontier(mu, S, weight_bounds=(-2, 2))

You can set your expected return and the program uses the historical volatility and correlations.

FWIW. Probably not ultimately useful but I thought it could be useful as it gives you more (correlations) and ultimately seems easier even if you cannot calculate everything within P123.

Hope you get some use out of this. Sorry to distract from the purpose of this thread if this is off topic.

Best,

Jim

Thanks Jim! This is really interesting!

I will try this out.

I ignore correlations in favor of focusing more on the components of mu. I think there’s more juice to be squeezed from deconstructing assumptions about constant returns than from juicing the MVA.

As you’ve shown, this is more easily done in an offline program using “test” data. But that’s the rub… I need production data in a prod environment.

David,

Thank you and glad that perhaps this is helpful.

I have a slightly different view. While I agree that correlations and standard deviations are not very predictable, I believe they can be more predictable (and persistent) than expected returns.

While there is some literature to suggest that expected returns are not as predictable, I think that must depend on what model is used and cannot be generalized. So any opinions I have on this almost certainly are not true for someone else, and probably not even true for all of my models.

So I am in both camps. I ultimately want to develop some recurrent neural nets for prediction of expected returns. I am not there yet and maybe this is just a hobby.

I have had some success using simple multiple regressions. While still probably just a hobby, it only takes a minute or two to do this so I have some results.

Being in the other camp too (it is not a law that I have to be consistent), I sometimes use the minimum variance portfolio (which ignores expected returns). This leads to a conservative portfolio that does seem to do well in drawdowns.

For this last approach to work well, I do what Ray Dalio does: I only include things with similar expected returns. I may slice AGG out of the DataFrame, for example. As you probably know, Ray Dalio uses leverage to get the expected returns the same before he does any analysis.

So not married to any view, some of this is just a hobby but….

I definitely enjoy the discussion and I hope some of this is helpful.

BTW, code for minimum variance portfolio if you find the Python/minimum variance portfolio of interest:

weights = ef.min_volatility()
ef.portfolio_performance(verbose=True)
weights

BTW, there is a lot that can be done with P123 to backtest. All of the price data can be downloaded easily. Books can be weighted as per the Python program to get equity curves.

Position sizing within sims is a problem. In addition to the above methods, one can deconstruct the sims and weight them in a Book (sometimes). E.g., sim 1 has buy rule RankPos = 1, sim 2 has buy rule RankPos = 2, …, sim 10 has buy rule RankPos = 10. These can then be put into Books (sometimes).

I have made feature suggestions. But I am pretty happy with what I can do now. These were just suggestions that might potentially attract pros such as yourself.

Best,

Jim

We’re working on a couple of things that might help. One is buy-driven simulations, where you sell your lowest ranked stock automatically if and only if a new stock appears in the buy rules. The other, which is probably more applicable to this situation, is a “do not rebalance” rule in the portfolio-weight rebalance tab where you can enter a rule (e.g. StaleStmt = 1, NoDays < 15) and it won’t rebalance any stocks that fulfill that criterion. Unfortunately due to the FactSet transition these improvements have been backburnered for a bit.

I would welcome both of these enhancements, Yuval.

I think it makes sense to limit rebalancing when the micro-transaction costs outweigh the incremental benefits.

– btw, has anyone had any luck with zero-fee brokers? That would change the dynamics here significantly.

I also think it makes sense to explore buy-driven rules.

– If it’s not worth buying, it may be a sell or it may be a hold. If it’s not worth holding, it’s definitely a sell.

However, neither of these enhancements tackles my current blockers. It’s great that we have ranks, but I need ranks or positions based on expected returns.

– Expected returns are largely a function of rank, but are also influenced by expected fees, market impact, wash-sale taxes, capital-gains taxes, etc. These are in turn partial functions of portfolio size, current holdings, and past holdings.
– Yes, this seems complicated, which is why I am hoping to create a linear comparison criterion.