Portfolio Risk Control brainstorming

Marco,

I like your PV ideas, but be aware that there is some look-ahead bias in both of those methods. Keep them (I like them too), but consider adding the following method as well. This is a “timing method.” The timing itself may or may not be useful, but I am guessing many will use it.

It is also an example of walk-forward cross-validation: two birds with one method. It uses “risk parity.”

I think I have to use a screenshot in this post, as you will not have access to PV’s timing models unless you pay for a membership (I think).

The method looks at the last 3 months to determine the holdings for the next month, then walks forward a month (looking at the rolled, or “walked-forward,” trailing 3 months) to determine the holdings for the following month, with that month’s data never used to determine its own holdings. Repeat, walking forward until today. No information from the next month is known or used.

So this is a backtest that does not use any information from the future, unlike what we do at P123.

In other words, it works just like a live P123 port, except as a walk-forward backtest, which has advantages: no look-ahead bias. It is much like a rolling backtest, but better, because there is no look-ahead bias.
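To make the mechanics concrete, here is a minimal Python sketch of that walk-forward loop. It assumes a DataFrame of monthly returns and some `weight_fn` (e.g., risk parity) that you supply; the names are illustrative, not PV’s actual implementation.

```python
import pandas as pd

def walk_forward(returns: pd.DataFrame, weight_fn, lookback: int = 3) -> pd.Series:
    """Walk-forward backtest: each month's weights use only the prior `lookback` months."""
    out = {}
    for i in range(lookback, len(returns)):
        history = returns.iloc[i - lookback:i]   # trailing window only: no look-ahead
        weights = weight_fn(history)             # e.g., risk parity on that window
        out[returns.index[i]] = float((weights * returns.iloc[i]).sum())
    return pd.Series(out)
```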

Then, if you like the backtest, just use it as a “PORT.” So, the same thing as at P123.

Note: Yuval was a big fan of Omega for a time. If he still is (I have no information on this), that is available on PV also. But there is lots for everyone to look at; I do not claim the above is the only addition you may want to look at, or even the best.

**But I like walk-forward backtests wherever I find them (including PV, or maybe P123 if you end up using this).**

This is meant to be kept generic (just the SPDRs, TLT, and GLD). Better returns and less risk by most measures (drawdown and Sharpe). I am sure others can find better models. I am not using, selling, or recommending it!!! Anyone is welcome to criticize with that in mind (I won’t care).

Even now at PV (and especially if P123 staff and members vote to adopt this), you can use sims uploaded into PV instead of tickers.

Jim


Jim, this is not about risk models. This is a Feature Request to enhance our Book simulation, which currently has only “target weights” as the rebalance method. No?


Hmmm…. I am tempted to say it is both.

For sure it works pretty well for “controlling risk.”

Anyway, I am not really making a feature request. I thought it was on topic with regard to “risk models.”

And really, it is the same math. My method looks at correlations and volatility AMONG the holdings, while Korr123 is looking at correlation and volatility relative to a predetermined set of factors.

My method does take into account additional “factors,” like the correlation (or lack of correlation) between XLE and XLF, or between XLI and XLB; in other words, correlation among the different sectors.

I might add, it looks at correlations that you might not even be aware of.
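As a concrete illustration (a minimal sketch; the CSV of daily sector-ETF closes is a hypothetical stand-in for your own data source), the full correlation matrix surfaces every pairwise relationship at once:

```python
import pandas as pd

# Hypothetical file of daily closes with columns XLE, XLF, XLI, XLB, ...
prices = pd.read_csv("sector_etfs.csv", index_col=0, parse_dates=True)
corr = prices.pct_change().dropna().corr()   # pairwise correlations among all sectors
print(corr.loc["XLE", "XLF"], corr.loc["XLI", "XLB"])
```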

My mind sees little difference if you start with a large, diverse set of holdings. My method is limited in that the holdings are defined ahead of time, while Korr123’s method assumes the correlations (with the factors) are constant and will hold up. And it is limited to those factors (just a few for Fama and French).

One could use both, couldn’t one? I would probably look at both. Maybe use my method for the final weights, but make sure the factor loadings are pretty good for the stocks or ETFs that I include in the final analysis.

But for sure, this is just something for you to look at; I am not making a feature request here.

Jim

Jim,

Perhaps this is where we should start in our quest to provide more “risk control”. NOTE: I renamed the thread to “Portfolio Visualizer ‘Market Timing’ tool for Books”.

We already have a Book functionality that allows you to combine your models with other models or ETFs, and specify target weights. But that’s where the dev work stopped.

Making “Books” A LOT more appealing seems like low-hanging fruit, provided we focus the effort. The PV Market Timing tool is crazy flexible: there are 8 Timing Models, each with a crazy number of options.

So what’s the MVP (Minimum Viable Product) upgrade for Books?

In your example you chose the “Adaptive Allocation” Timing Model with 12 assets, holding the top 6 of those, weighted using “Risk Parity” for Allocation Weights.

  1. So which Timing Models are part of the MVP?

  2. Within each Timing Model, which of the time-consuming options belong in the MVP (e.g., Allocation Weights)?

  3. What other key settings are needed in the MVP that are easy to do (e.g., “Leverage”)?

Thanks!

PS: A separate thread is needed to discuss the specifics of an MVP for Risk Factors.

Marco,

I do not love that subject title, but it is my fault. It is too limiting (i.e., only relative strength in the title) and not the main thrust of what I was trying to convey (modern methods of controlling risk using correlation and covariance matrices).

I do think that is a good way to look at what I am suggesting: a machine learning and/or mainstream financial tool for books.

IT WAS A MISTAKE TO ADD THE TIMING MODEL IN MY EXAMPLE, although I do use it sometimes. You probably want to keep the timing option if it is not too hard to code.

But my greater point is that once you decide on a set of assets, you can optimize those holdings in Books now (without any timing).

Or you could optimize using some of PV’s algorithms, including risk parity, perhaps with risk in mind rather than overfitting for returns.

So, to avoid cherry-picking, I usually use the SPDRs, a smaller-cap ETF, GLD, and TLT; i.e., not optimizing with a high-flying tech ETF that I already know did well in the past, which would give eye-popping results that will not continue out-of-sample.

Here is an example without timing. It does reduce the risk, and it is a mainstream idea in the financial industry.

It is probably more realistic (and less overfitted) than optimizing in Books.

It would be a widely accepted MVP addition to Books, one recognized in the financial community (with or without timing), I think.

Without timing (all assets held all the time, but with weights changing as correlations and volatility change): XLE, XLU, XLK, XLP, XLB, XLY, XLI, XLV, XLF, IWM, TLT, and GLD below.

I think it might be just me on this forum, but I like that this is a mainstream, well-established idea in the financial community.

Maybe ask Riccardo about that. If he has a more established method, I would like to see it.

Me personally: I absolutely hate the ad hoc, off-the-cuff, overfitted ideas we see in the forum (which is not to say I am not responsible for the majority of that in my posts) :thinking:

My main recommendation is to pass it by Riccardo and see whether he has already studied the idea or can find it in the index of one of his texts. At the least, it should be an ESTABLISHED machine learning/statistical tool, even if it is not yet mainstream in the financial industry, I think.

Jim


BTW, I am not recommending any of this be put on any priority to-do list for now.

Obviously, for more advanced machine learning you would shrink the covariance matrices using the Ledoit-Wolf algorithm, wouldn’t you? Very easy to do in Python, BTW. All open source.

But mainstream economic theory would be a start at P123 (e.g., risk parity, minimum variance, maximizing the Sharpe ratio; hierarchical risk parity is also becoming mainstream).
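As a sketch of how little code one of those objectives takes (minimum variance here, with random data standing in for real returns):

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(returns: np.ndarray) -> np.ndarray:
    """Long-only minimum-variance weights from a T x N matrix of asset returns."""
    cov = np.cov(returns, rowvar=False)
    n = cov.shape[0]
    result = minimize(
        lambda w: w @ cov @ w,                       # portfolio variance
        np.full(n, 1.0 / n),                         # start at equal weight
        bounds=[(0.0, 1.0)] * n,                     # long-only
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # fully invested
        method="SLSQP",
    )
    return result.x

rng = np.random.default_rng(0)                       # toy data: 252 days, 3 assets
print(min_variance_weights(rng.normal(0.0005, 0.01, (252, 3))).round(3))
```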

But what is mainstream might best be determined by Riccardo, I would guess. P123 already has a priority list for machine learning, but perhaps that will be ongoing, with some of the top priorities being completed and replaced with new ones?

None of this is hard in Python or PV, so these are just some ideas: not a request on my part, but mainly ideas for what to borrow from PV. And I defer to Riccardo (and others with experience in finance) on what is mainstream.

Jim


Hi Marco and Jim,

My 2 cents if I may…

I also think timing models are somewhat of a red herring here. Most timing models do not require mean variance optimization and are typically implemented using ETFs, so I think that most should be possible today in p123 using existing simulation functionality. The adaptive allocation timing model is an exception as it requires portfolio optimization in addition to the timing logic. Likewise, if you wanted to support timing models at the book level, i.e. timing models composed of strategies and not just ETFs, then that would also require changes to p123.

I think the more fundamental question is what portfolio optimization use cases should p123 support and what does that minimum viable product look like? I see several open questions.

  1. Portfolio optimization at the book level and/or strategy level?
  2. What optimization targets?
    a. Maximum return
    b. Maximum sharpe
    c. Minimum volatility
    d. Maximum utility
    e. Risk parity
    f. Others
  3. How are the return and risk parameters estimated?
    a. Estimated from historical returns and covariances (this is PortfolioVisualizer’s Historical Efficient Frontier)
    b. Using user-supplied forecasted returns (this is PV’s Forecasted Efficient Frontier)
    c. Historically using a walk forward rolling window (this is PV’s Rolling Portfolio Optimization)
    d. Many other methods (e.g. using a risk model if trying to estimate for a large number of assets, etc)
  4. What constraints should be supported?
    a. Leverage
    b. Individual Weight Constraints (Min/Max)
    c. Group Constraints (Very useful for controlling long or short constraints at the book level)

As mentioned in the trading multiple versions of a strategy thread, I had a specific use case: applying portfolio optimization to the problem of building a book composed of multiple versions of a strategy. I am using this in a live book that I’m trading already. Using the API, I query p123 for my strategies’ historical returns, compute a max-Sharpe portfolio optimization using historical returns/covariances with a target long/short allocation set, and then manually input these weights as my static book weights in p123.
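For reference, here is a minimal sketch of that optimization step (not feldy’s actual code; the sleeve sizes and allocation targets are assumptions). It maximizes the Sharpe ratio subject to long/short group constraints, with the first `n_long` columns treated as the long sleeve:

```python
import numpy as np
from scipy.optimize import minimize

def max_sharpe_long_short(returns: np.ndarray, n_long: int,
                          long_total: float = 1.0, short_total: float = -0.3) -> np.ndarray:
    """Max-Sharpe weights where the first n_long assets form the long sleeve
    and the rest form the short sleeve (illustrative group constraints)."""
    mu = returns.mean(axis=0)                                # mean strategy returns
    cov = np.cov(returns, rowvar=False)
    n, n_short = len(mu), len(mu) - n_long                   # assumes both sleeves non-empty
    bounds = [(0, None)] * n_long + [(None, 0)] * n_short    # sign constraint per group
    constraints = [
        {"type": "eq", "fun": lambda w: w[:n_long].sum() - long_total},
        {"type": "eq", "fun": lambda w: w[n_long:].sum() - short_total},
    ]
    x0 = np.concatenate([np.full(n_long, long_total / n_long),
                         np.full(n_short, short_total / n_short)])
    res = minimize(lambda w: -(w @ mu) / np.sqrt(w @ cov @ w), x0,
                   bounds=bounds, constraints=constraints, method="SLSQP")
    return res.x
```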

So using the above questions as a guide, my use case is (1. book, 2. b. max sharpe, 3. a. historical, 4. c. long/short group constraints).

Personally, I think there is more juice to squeeze here applying portfolio optimization at the book level than at the strategy level. The expected returns for the top-ranked stocks my strategies hold are equal, and I already have p123 formula weighting to tune their holding weights based on volatility, etc. There are currently no p123 tools to manage book weights. And as I mentioned in the other thread, book-level portfolio optimization could be very useful for increasing adoption of designer or p123 models into users’ portfolio mixes.

While not strictly necessary for a minimum viable product, I would also include 3c (rolling optimization) and 4a (leverage) in my list. I can’t do 3c using the API, as p123’s books only support static weights, nor can I currently change leverage at the p123 book level; both are very limiting.


I was writing this before I read @feldy’s reply. I’ll go through it in more detail later, thanks.

I’m starting to think we should just focus on having tools and building blocks that facilitate doing risk control/optimization outside of the P123 UI (UI work is always time consuming, requires coordination with data scientists and UI designers, and takes multiple iterations to get right).

Some UI tweaks will be needed, but most of the heavy lifting would be outside using APIs to download data, upload results, and automate.

Here are some of the key missing pieces, then:

  • T.B.D. enhancements to Books (website)
  • Enhancements to make it easier to run long/short strategies (live or simulated), which right now require three manual steps (website)
  • Whatever other tools are necessary to eliminate the most tedious operations (website, API)
  • API to download the time series of strategies
  • API to update the weights of a Book

NOTE: Our DataMiner seems like a natural fit for running this outside of the P123 website: it is written in Python, can be used by non-programmers, and it is open source. This risk control/optimization project might just be the use case that gets people to contribute to it (nobody has so far). Also, the DataMiner can be run on our servers with your instruction file, so we can achieve full automation of the rebalance process.

Another benefit of this approach is that I don’t have to go back to school to get an MBA :slight_smile:

Just a question.

I do not use the API. I probably could, but I just hate data munging and wrangling.

One direct question. Yuval is good at math and programming the P123 platform.

Is he using Python and the API now? I am not really concerned with what Yuval is doing, but if he isn’t, that says something about the API, the data wrangling, and the market for some of this.

Jim


I use the DataMiner quite a lot and the API a tiny amount. I have a friend who has written some Python programs for me, which I now use. I haven’t gotten around to learning Python myself.

P123 introduced linear regression not too long ago and I’m still playing with that. If multilinear regression is the next feature, then perhaps creating risk-factor models won’t be extremely difficult for users who want to put the time into it.


Apologies in advance, but I’ve been stalking this thread. I recently upgraded my subscription thinking I’d be able to set portfolio weights using risk parity, minimum variance, maximum Sharpe, etc. I’m not blaming anyone but myself in that regard, but I wanted to voice my support for these features (if this is the correct forum for that).

Also, I should mention that I joined Portfolio Visualizer after reading Jim’s posts, and I will agree that it’s a very appealing product in terms of ease of use, an intuitive interface, and, most specifically, the position-sizing capabilities (which Jim has eloquently pointed out). Portfolio123 is an incredibly powerful tool, and IMHO the addition of many of the features mentioned in previous posts could make the product more of a practitioner’s tool than an academic’s tool.

Just my input…

thanks

“Set portfolio weights” where?

  1. In a Portfolio Strategy (live or simulated), where the assets are stocks or ETFs determined using a ranking system and buy/sell rules, and weights are decided using equal weighting or formulas
  2. In a Book (live or simulated), where the assets are predetermined and could be other Portfolio Strategies, individual ETFs, or stocks

Thanks

#1 - hopefully I have not mischaracterized anything.

OK, so all you are asking is to add more options to our Formula Weight, with whatever constraints are appropriate for each method.


Marco,

I will let Hunterirby speak for what he is looking for. But I think that would be extremely cool. Extremely. I might have to give up my PV membership if P123 did that with ETFs.

Jim

Sounds good. I will add it to the Roadmap. What I like about this enhancement is that we can use it in three different places, three birds with one stone: Stock Strategies Position Sizing, ETF Strategies Position Sizing, and Book Assets Position Sizing.

For example, Risk Parity seems straightforward when this guy shows you, and there is no need for a solver. Very excellent video, no pauses, no mistakes, an impressive mind:
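For reference, the common solver-free shortcut is “naive” risk parity: weight each asset by the inverse of its volatility. A minimal sketch (the returns matrix stands in for your own data); note that it ignores correlations, so it only approximates true equal risk contribution:

```python
import numpy as np

def inverse_volatility_weights(returns: np.ndarray) -> np.ndarray:
    """Naive risk parity: weight by 1/volatility, normalized to sum to 1."""
    raw = 1.0 / returns.std(axis=0)   # per-asset volatility over the window
    return raw / raw.sum()
```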


Thank you Marco,

How could that not be the optimal portfolio (with some slight alterations), given the assumption that your ability to predict returns is limited? If you think you can predict returns well, then “Maximize Sharpe Ratio” may be for you. Or maximize the Sharpe ratio with shrunken estimates, using something like Ledoit-Wolf or Bayesian analysis.

Ledoit-Wolf shrinkage is part of scikit-learn and can be found here: LedoitWolf
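For example (a minimal sketch, with random stand-in data in place of real returns):

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
X = rng.normal(0.0, 0.01, size=(252, 5))   # T x N matrix of asset returns

lw = LedoitWolf().fit(X)
shrunk_cov = lw.covariance_   # shrunken covariance matrix, ready for an optimizer
print(lw.shrinkage_)          # how far the sample covariance was pulled toward the target
```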

Black-Litterman is also a Bayesian method. It is a well-accepted method in the financial industry and has been around for a long time; it can be found at Portfolio Visualizer. I do not know much about it, and it felt kind of clunky the few times I tried it.

One caveat in practice: if you load up on a lot of low-volatility, low-return ETFs, the algorithm will overweight them more than most P123 members desire. Your Sharpe ratio might be pretty good, but you would be lagging in the returns department.

If you use MINT, for example, it will expand the MINT holding to give it an “equal risk contribution.” The result will be low risk and low return.

This can be addressed in two ways:

  1. don’t put MINT into your portfolio, or limit the number of holdings like MINT;

  2. leverage some of the assets to create equal expected returns among the assets.

Ray Dalio does that with his Bridgewater funds (the largest hedge funds in the world): he leverages some of the assets (e.g., TIPS). His idea of risk parity is a little different and probably not what you want; the RPAR and UPAR ETFs can give you that anyway (UPAR is a leveraged version of RPAR).

Also, the video mentions the weakness of relying on historical correlations and volatility.

I like to use a rolling or walk-forward method that looks at shorter, recent periods for the covariance matrix. You will find that stocks start to get really volatile in drawdowns (i.e., increased risk), and the algorithm naturally reduces those holdings as volatility increases, helping with the drawdowns.

Far from perfect, and not a crystal ball. But less overfitted, I think. After all, it is one simple algorithm with limited ability to overfit through multiple iterations (fewer degrees of freedom, in mathematical parlance).

One possible advancement is hierarchical risk parity. The results are not that much different when I use it, and it may not be worth the additional trouble of implementing. But see for yourself.
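If you do want to try it, one open-source route (assuming you have the PyPortfolioOpt library installed; the CSV path is a hypothetical stand-in) is its HRP implementation:

```python
import pandas as pd
from pypfopt.hierarchical_portfolio import HRPOpt

# Hypothetical file of daily asset returns, one column per ETF
returns = pd.read_csv("returns.csv", index_col=0, parse_dates=True)

hrp = HRPOpt(returns)      # clusters assets by correlation distance
weights = hrp.optimize()   # dict of ticker -> weight
print(weights)
```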

Obviously, where to put this on the priority list is not set in stone. I am sure everyone will learn more about this method (I have with this thread) and consider or reconsider its value. But thank you, Marco, for looking at the idea and advancing the discussion!!!

Jim


Just a quick edit on anything I said above, which I did not actually re-read. But I welcome anyone calling BS on any of it.

Test anything I said, or test your own ideas, in Colab, with or without Bard’s help on any coding or statistical questions. Bard will download the pricing data from Yahoo for you. It can probably also send you that pricing data as a CSV file through Google Drive or even Dropbox, with any modifications (e.g., converted to returns).

Even if it is not BS, you probably want something different from what I suggested above. And you can test it yourself.

Jim