Factor Effectiveness over Time

Walter,

You are right.

But I just like American Capitalism. Free competition of the best ideas.

Everything passed through a central committee? Are we in America now?

Oh yeah. Socialism is cool with young people now. Forgot.

-Jim

And to be clear: ANN makes Jim Simons's early computers (at Renaissance Technologies) seem like... well, just machines ;-)

Primitive machines at that. That is not a boast for ANN; it is just a fact, a consequence of the progression of computer technology and human knowledge. Google had not yet created TensorFlow when Jim Simons started. Everyone is familiar with Moore's law on the doubling of computer power every 18 months. Simons started a while ago.

Yes, more data can be good, IMHO. I might be wrong but that is my opinion.

-Jim

Jim,

I think you will like this article from the Financial Times.

Regards
James

Renaissance, DE Shaw look to quantum computing for edge
Major hedge funds start to experiment with quantum computing as quant investing evolves

Renaissance Technologies, DE Shaw and Two Sigma are among hedge funds that have started to experiment with quantum computing, dismissed by some but heralded by others as the next great revolution on the horizon for the finance industry.

Quantum computing is built on the brain-twisting fact that atoms can exist in different states and places at the same time. Harnessing this in practical ways was long theoretical but companies such as Google, IBM and smaller quantum computing-focused start-ups are trying to turn science fiction into fact.

The implications are profound. Even the most powerful supercomputers are based on a binary system, a series of on-and-off bits usually expressed in ones and zeroes. Quantum computers use “qubits” that can be one and zero at the same time, allowing them to store and process information at dizzying speeds.

That is beginning to intrigue parts of the so-called quantitative investment world, where computers trawl markets for profitable patterns. While it is early days, the multi-state qubits of quantum computers can in theory crunch some of finance’s knottiest problems.

Marcos Lopez de Prado, a quant researcher and fellow at the Berkeley Lab, says: "You need to decode markets and find the invisible patterns. The people that do that best have the best models and the most powerful computers. It gives you an edge. It's amazing what we could do with quantum computers."

Vern Brownell, chief executive officer of D-Wave, the first to develop a commercially available quantum computer, says his company has had “a lot” of discussions with banks and hedge funds.

“No matter how many computers you put together, you’re not going to get this power,” says Mr Brownell. “It’s a greenfield area of exploration for finance.”

While he declined to name clients that have tried D-Wave’s quantum computers, Two Sigma, Renaissance and DE Shaw as well as WorldQuant — the hedge fund industry’s most sophisticated players — are among those exploring the field, say people familiar with the matter. DE Shaw has also invested in QCWare, a quantum computing software start-up.

Alex Wong, managing director at DE Shaw, says: “We have been exploring quantum computing because we’re interested in new computing paradigms. We’re interested in any technology that has the potential to leapfrog existing approaches, particularly if it can surpass both the speed and reliability of traditional computing.”

Some computer scientists point out that D-Wave’s process is not so much “true” quantum computing but something known as “annealing” — using some of the practical properties of quantum mechanics.

Moreover, many quants say computing power isn’t really an obstacle any more, given the clout of modern chips, and they are sceptical that quantum computing will be a viable tool for the foreseeable future.

“We’ve gotten pretty good at doing some pretty awesome things with classical computers, that we couldn’t even dream of 10 years ago,” says one hedge fund technologist. “Faster isn’t always better. Just because you can listen to a podcast at five times the speed doesn’t make it more understandable.”

Mr Brownell shrugs off the criticism of D-Wave’s machines, pointing to their demonstrated — and increasing — power. Google tested the 1,000-qubit version in 2015 and found it was 100m times faster than a single conventional computer core. The latest D-Wave 2000Q model unveiled this January boasts 2,000 qubits.

Indeed, a small group of quants led by Mr de Prado are excited enough to set up Quantum for Quants, an organisation to discuss developments. “It’s a compromise that was very criticised at the start. But instead of a pure quantum computer D-Wave went for a viable machine,” says Mr de Prado.

These “quantum quants” say that initially the most promising fields for the embryonic quantum computers are in “portfolio optimisation”, in other words, arranging in real time the best possible basket of various assets and securities for shifting market environments.

Quantum computing’s unique characteristics — put simply it is more suited for calculating a vast series of potential outcomes than binary-based classic computers — make it well suited for this problem, says Michael Sotiropoulos, global head of quant research for equities trading at Deutsche Bank.

“In finance we solve a lot of problems that have probabilistic outcomes, like portfolio optimisation,” he says. “With classic computing that can take a lot of time. But quantum computers don’t just say one plus one equals two, it gives a probabilistic prediction.”

Quantum computing is in its infancy “but the proof of concept is there”, argues Peter Carr, formerly head of market modelling at Morgan Stanley and now chair of the finance and risk engineering department at New York University.

“Some calculations can get very complicated, and you need brute force to solve them. When it becomes reality — and it isn’t quite yet — we can solve all sorts of problems,” he adds.

James,

Like it, and it scares me too.

Much appreciated. Thank you.

-Jim

Walter, with all due respect, I don't see the purpose of a longer incubation time. Even at one year you can have poor results that are still within a model's expected result window but look terrible. It doesn't mean that model doesn't follow its expected return. I think many people (designers included) have an unrealistic expected return plotted in their heads. The market is dynamic and must be treated as such. Statistical results are a static value based on a best-case scenario from a fitted backtest (how fitted is up for debate, but nobody puts out a model that didn't perform positively in the past).

This thread got derailed!

So, to try and bring it back to my point: after seeing the results of this test (and by no means do I think this test was granular or complete), it was enough to open my eyes to two frames of mind when model making. We can create a ranking system that is simplistic enough to encompass factors and formulas that have a mostly positive slope in most market conditions, or we can create ranking systems that work better in certain types of market conditions. We would then switch between them if we feel we have the know-how to identify where the turning points may be. To take it one step further we could actually create a single ranking system for both by using a conditional node at the root.
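To illustrate the conditional-node idea, here is a minimal Python sketch (this is not actual P123 ranking-system syntax; the regime flag, factor columns, and weights are all hypothetical placeholders):

import pandas as pd

def conditional_rank(stocks: pd.DataFrame, bull_regime: bool) -> pd.Series:
    # A conditional node at the root: one sub-ranking is used in bull
    # regimes, another in bear regimes. Columns are assumed 0-100 factor ranks.
    if bull_regime:
        # Momentum/growth-tilted sub-ranking for up markets
        score = 0.6 * stocks["momentum_rank"] + 0.4 * stocks["growth_rank"]
    else:
        # Quality/value-tilted sub-ranking for down markets
        score = 0.6 * stocks["quality_rank"] + 0.4 * stocks["value_rank"]
    return score.rank(pct=True) * 100  # rescale to a 0-100 final rank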

We can't predict the future, but we can take a deeper look into the past, ask why something might have occurred, and then question the odds of it happening again. If a result looks significant enough to justify further investigation, try to put odds on how consistent it was through the periods in question. I'm trying to improve my confidence interval with a ranking system. We put a lot of work into our ranking systems, and they are the heart of our trading systems. If they are giving us ranks that recent history suggests would actually hold a negative slope, then the odds are against me picking winners with the buy rules. In those cases I should change the ranking or get to safety (in an ideal world).

Marco really does need to add a “Like” button to the forum. Anyway, consider it being clicked in response to this post.

“We would then switch between them if we feel we have the know-how to identify where the turning points may be.”

Yup. That’s the way to go.

“To take it one step further we could actually create a single ranking system for both by using a conditional node at the root.”

If there's ever to be a legitimate, effective quant approach, this ("regime switching") is what it will have to be about. I made some tries, but frankly, this sh** is H-A-R-D (unless we want to data mine regimes, in which case it becomes easy).

Marc,

Just to agree that this is hard.

As far as quantitative approaches go, I tried this: Random Forests and boosting.

I tried it with boosting (only), and not exactly the same time periods as used in the paper.

I got great results! I really did: in-sample. Out-of-sample was exactly like the benchmarks. I am still not sure why the authors could do it but I could not.

You may say this is just timing for the S&P 500 in the paper. It is. I tried it with the Sector SPDRs, hoping to know which ETF to buy or sell, and it could be used with rotating ports that purchase within the sectors (if it worked). But it just did not work for me.

Of course, just because I haven’t done it yet does not prove that someone else cannot.

But I agree. I think it is hard—for me at least. Hard for me using the quant methods and technical factors (following the general approach used in the paper with boosting).
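For concreteness, here is a minimal sketch of the kind of boosting-based timing experiment I mean, using scikit-learn rather than whatever the authors used; the file name and feature columns are hypothetical placeholders:

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import TimeSeriesSplit

# Hypothetical dataset: monthly technical factors plus a label for
# whether the index rose the following month.
df = pd.read_csv("sp500_monthly_factors.csv")  # placeholder file
X = df[["mom_3m", "mom_12m", "vol_1m", "ma_ratio"]]  # placeholder features
y = df["next_month_up"]  # 1 if next month's return was positive

# Walk-forward splits so no future data leaks into training; this is
# what separates the great in-sample results from the out-of-sample ones.
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
    clf.fit(X.iloc[train_idx], y.iloc[train_idx])
    preds = clf.predict(X.iloc[test_idx])
    print("out-of-sample accuracy:", accuracy_score(y.iloc[test_idx], preds))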

-Jim

A lot of people believe that it will be possible to identify these turning points and then to switch regimes. That’s the basis of tactical asset allocation (TAA) as practiced by allocatesmartly.com, among others. What you’re proposing may be slightly different.

Personally, I don’t believe that one can consistently identify the turning points and switch regimes at the right time. I believe an all-weather approach is far safer. However, I would be willing to be convinced otherwise if anyone could show me some folks who have practiced regime-switching successfully out-of-sample.

See https://backland.typepad.com/investigations/2019/01/market-timing-tactical-asset-allocation-and-trading-a-dialogue.html for a longer discussion of this issue in which I try to argue both sides of it.

Instead of looking for regime changes and trying to allocate accordingly, another approach might be something like the "book" approach: come up with a set of minimally correlated portfolios (perhaps with different asset classes or market sectors), then try to weight your investment in each portfolio in the book in some optimal way.

There are tactical strategies that attempt to do this by choosing the active portfolios and weighting them based on a ranking across their history of returns, volatilities and correlations. Here's the most recent paper I found on the subject. You'd have to work outside of P123, I think, with Python or R.
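As a rough illustration (not the paper's exact algorithm), the sketch below scores each portfolio in the book on trailing return, volatility, and average correlation to the rest of the book, then equal-weights the top scorers; the column layout and 12-period lookback are assumptions:

import pandas as pd

def book_weights(returns: pd.DataFrame, top_n: int = 3) -> pd.Series:
    # returns: periodic returns, one column per portfolio in the book
    mom = returns.tail(12).mean()   # trailing average return (higher is better)
    vol = returns.tail(12).std()    # trailing volatility (lower is better)
    corr = returns.corr().mean()    # average correlation to the book (lower is better)
    score = mom.rank() + (-vol).rank() + (-corr).rank()
    chosen = score.nlargest(top_n).index
    weights = pd.Series(0.0, index=returns.columns)
    weights[chosen] = 1.0 / top_n   # equal-weight the chosen portfolios
    return weights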

Russ

Russ,

Here is the entire script, which uses a Modern Portfolio Theory library (PyPortfolioOpt) in Python. Some of the code is in use and some is commented out (with '#', as you know), depending on what I was doing when I last used it.

I believe that, as it is, it gives the minimum-variance portfolio, which may be what you are suggesting in your post:

# pip install PyPortfolioOpt   (run this in the shell, not in Python)
import pandas as pd
from pypfopt.expected_returns import mean_historical_return
from pypfopt.risk_models import CovarianceShrinkage
from pypfopt.efficient_frontier import EfficientFrontier

# Price history, one column per asset
df = pd.read_csv('/Users/JamesRinne/opt/files/popt.csv')

mu = mean_historical_return(df)            # annualized mean historical returns
S = CovarianceShrinkage(df).ledoit_wolf()  # Ledoit-Wolf shrunk covariance matrix

ef = EfficientFrontier(mu, S)

# Alternative: maximum-Sharpe weights instead of minimum variance
#weights = ef.max_sharpe()
#cleaned_weights = ef.clean_weights()
#print(cleaned_weights)

# Minimum-variance portfolio (what I run as-is)
low_vol = ef.min_volatility()
ef.portfolio_performance(verbose=True)
print(low_vol)
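(For what it's worth, the Ledoit-Wolf step shrinks the sample covariance matrix toward a structured target, which tends to make the optimized weights more stable when the return history is short relative to the number of assets.)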

I do not know if this helps. Can you recommend a library that you use?

-Jim

Nobody is under an obligation to convince you of anything. And there is no law that says this sort of thing must be done via a statistical model. It's cool that people try (as have I, albeit unsuccessfully), and who knows what some may eventually come up with along those lines. Speaking for myself, though, I'm not willing to banish the human brain from the process. Even people who work with models can still incorporate the brain. I've never advocated sticking with a model for life. When it comes to real money, I've constantly moved from model to model depending on my sense of the market. It's like when some bozo took an idiotic cheap shot recently at the Cherrypicking the Blue Chips DM. Well, duh . . . I rotated out of that well over a year ago, maybe two . . . who remembers?

It's OK to try to come up with a single good-for-all-time model if that's your thing. But life keeps throwing new things around, and at some point you may be called upon to face the famous Q&A (Q: How long can the market stay wrong? A: A lot longer than you can stay solvent.)

Wow, that is helpful, thank you. I was working with the R implementations of the FAA and EAA algorithms, but maybe the PyPortfolioOpt approach is the better way to go now. When I was looking at these methods a few years ago I did not find anything in Python, but Python is growing by leaps and bounds.

In the past I was looking for ways to combine my P123 ports with other investments and have some intelligent way of weighting the assets to achieve max Sharpe. But now this discussion is getting me interested in trying these portfolio optimizations to automatically allocate across a set of P123 ports having very focused, minimally-overlapping strategies.

My expertise is data analytics, not finance, but my intuition is telling me that this might be an approach I can implement with the data and tools at hand to address the regime-switching problem.

Thanks - Russ

Marc,

That being said, about rotating out of your blue-chip cherrypicking strategy: how do you judge when to cut bait? One has to make a judgement as to whether a strategy is just temporarily underperforming, underperforming for the foreseeable future due to an unfavorable economic environment, or just plain a bad strategy (hopefully the last was avoided with good model conceptualization).

Jeff

Great points. And one must also confirm that the model one switched to did indeed outperform.

Jeff, that is when you need some “adults” at the table to tell you what to do.

In the meantime you could have invested in 50 of the 250 over- and undercovered stocks on Seeking Alpha, supplied by Seeking Alpha via email to regular authors on Jun-26-2019. Attached is a sim showing how they have performed since then. "Only" a miserable 40% return over the last 7 months.

Also attached is a CSV file of the current holdings.



Holdings_50GemsfromSeekingAlphasUnderOvercoveredStocks 1-21-2020.csv (4.45 KB)

Georg,

Do Seeking Alpha's undercovered and overcovered 250 stocks typically outperform to this degree, or is this an outlier? Very good returns nonetheless.

Jeff

No doubt: “This sh** is H-A-R-D”

And I, for one, will need to see some proof before I believe that anyone can do much better than Ray Dalio while investing in larger-cap stocks, or that anyone here at P123 is doing as well as Ray Dalio with large-caps.

Ray Dalio does present some objective evidence that perhaps a discretionary rotation of strategies can work, I think.

Ray Dalio has about a 12% annualized return over the life of his Bridgewater Associates fund.

My understanding is that the research shows most retail investors would be better off not trying to do this; that most retail investors are not so good at using their "domain knowledge" or discretionary judgement; and that most retail investors are good at "switching" at exactly the wrong time.

I DO NOT mean to imply that someone with a graduate degree in finance should not try this, or that a retail investor shouldn't take that approach if they decide to.

-Jim

Jeff, I posted this on the forum in June last year.
https://www.portfolio123.com/mvnforum/viewthread_thread,11813#!#68595

Anybody here on P123 could have analyzed and used this data then, but apparently nobody did. So this is not some secret information to which only I had access. I don't know whether the performance is an outlier, but over the last 7 months this is all out-of-sample performance without survivorship bias. Had one had this universe 4 years ago, the sim shows an annualized return of 50% and a low turnover of 100%.

The point I am trying to make is that it is a futile exercise to come up with a universe yourself to select stocks from. It is better to "piggy-back" on fund managers' selections and crazy lists like the one from SA.


I for one am glad we have the Designer Models.

This is the only thing that—for the most part—resembles objective "research" at P123. De Prado says backtesting is not a research tool for determining which factors are effective. I tend to agree. Even if not perfect, seeing something out-of-sample is better evidence.

The Designer Models put some sort of limit on what we should be forced to believe in the forum.

If someone wants to tell us what we should be allowed to do (or to vet our decisions), then looking at their Designer Models will usually help us understand whether they should be telling us what to do, and whether we really need to be lectured to or not.

Yeah. We need the Designer Models for now.

-Jim

Jim,
I understand your point, but to what end? It sounds as though you would only consider someone's idea if they have a six-month OOS DM to back it up. I, for one, am always looking to test new ideas, and I can tell you that the sharing of strategies, ranking systems, and ideas has dramatically dropped off since DM inception. Just go back in the forums pre-DM and you will see a plethora of sharing.

Brett,

You clearly misunderstand. I have been defending people expressing ideas on the forum. And there are some good ideas.

While others (not me) have been saying things like the ideas expressed are from "children" at the kiddie table.

I just like having some idea of whose ideas are actually working.

As far as the old days go: I have great respect for Oliver Keating and Denny Halwes (from before the Designer Models).

Not sure that it is bad to see how those ideas are turning out before I commit my savings to those ideas.

But understood. You would like to get rid of the Designer Models.

For the record, I suspect the Designer Models are good (with few exceptions), just not good enough to overcome the slippage. One could debate whether overfitting contributes and, if so, how much. But it takes a lot to overcome the constant drag of slippage for small- and micro-caps. Designers should not be criticized, IMHO.

-Jim