To Quant Or Not To Quant, That Is The Question

My assumption is that everything (fundamentals, estimates, and prices) gets updated each night (i.e. Monday through Friday nights). We get to use this fresh daily data whenever we manually rebalance a portfolio during the week.

My understanding is that things are different when you run a simulation. The simulation has access only to weekly data for fundamentals and estimates (i.e. the final value of the week), but it does have access to daily price and volume data (so daily TA formulas can be used in sims). Saving only weekly data for fundamentals and estimates saves P123 a lot of storage space.

Am I mistaken? Could one of P123’s staff shed light on whether this understanding is correct?

Brian

o806, My understanding is that estimate revisions only update weekly, but prices and fundamentals are daily. I’ve bookmarked the link below that shows update status.

https://www.portfolio123.com/server_status.jsp

Hi Barn,

Thank you, very good ideas. My trading system regularly gets certain stocks wrong; I will look into that!
Best Regards
Andreas

Brian, that’s even much better than a monthly update! Again, on other platforms you may have more control (e.g. you can program everything yourself), but the data comes first, and how P123 manages its data (PIT, prices + fundamentals) should not be underestimated. That is the gold!

Best Regards
Andreas

But Factset isn’t PIT. Right? That’s my understanding of the future dataset.

Hi Walter,

Factset PIT → I had understood that it was PIT “good enough” - maybe not exactly the same PIT as S&P but likely to be decent and hard to figure out if one is better than the other. (I am aware that writing PIT “good enough” makes no sense from a theoretical / purist point of view. In theory there is only one version of the truth, but in practice …)

See → https://www.portfolio123.com/mvnforum/viewthread_thread,11932_offset,10#69595

Jerome

Hi Jerome,

It’s a bit of a mess to me. Raw FactSet appears not to be PIT. Some consumers of FactSet data then make and keep data snapshots in order to construct a PIT database. I suspect the PIT moniker will fall into disuse here. You can’t sell what you don’t have.

Walter

umh… quite concerning…

SpacemanJones, thanks for the link.

Apparently I misunderstood Marco’s post from June 2018 when he said,

o806, I might be wrong. Maybe that estimate update date only impacts the weekly figure that is used for backtests? Probably best to ignore me on this given your info.

Some of our estimates data changes weekly, some daily. For backtesting, we only use weekly data with the exception of prices. But for current screens and strategies, some estimates data will change from day to day.

Thank you Yuval,

You said: “…for current screens and strategies,…” The words seem to be chosen with some care here.

Does this mean P123 will be changing this after the move to FactSet?

Thank you for any information regarding planned changes.

-Jim

Yuval,

Could you clarify what “some of our estimate data changes weekly, some daily” means? The reason I ask is that I typically do not trade on Mondays, but I would change my routine if Monday has the freshest estimate data.

Is the reason some estimate data changes weekly that there is simply no change in the estimates on any day of the week? That would typically be the case if a stock is covered by only one or two analysts. In such a case, the estimate data would be “fresh” even though there was no change from the previous weekend.

Or is the reason that your data provider has a systematic bias toward updating some types of stocks more frequently than others (i.e. large caps get updated daily but small caps get updated weekly because the latter have a lower priority)?

Or is it that some types of estimates (e.g. for earnings) get updated daily while other types (e.g. for sales) get updated weekly?

More light would be appreciated.

Regards,
Brian

I’m sorry I was unclear. Your last guess is the correct one: some types of estimates data get updated daily but other types get updated weekly. The EPS revisions, for instance, are weekly only, while the EPS estimates are updated daily.

Yuval,

I hope I’m not giving the impression I’m trying to split hairs, but I’m still not sure if the Estimate factors I use are updated daily or weekly. But I really need to know since one of my systems is very heavily reliant on the freshness of the following factors:

CurFYEPSMean
CurFYEPS1WkAgo
CurFYEPS4WkAgo
CurFYEPS8WkAgo
CurFYEPS13WkAgo
and also
NextFYEPSMean
NextFYEPS1WkAgo
NextFYEPS4WkAgo
NextFYEPS8WkAgo
NextFYEPS13WkAgo

Are the above updated daily or just on the weekend?

Thanks in advance for your attention to this. I really appreciate it.

Brian

These are updated daily. But only weekly for backtests (like all our data except for prices and volume). The only estimates data that isn’t updated daily is the estimate revisions.

Yuval,

Thank you for taking the time to clarify this so precisely.

It is greatly appreciated.

Brian

There is a rather scathing article in the WSJ about the value of using AI to select stocks. The writer is Mark Hulbert, and the author of the study is Prof. Avramov. Unfortunately, I do not have an online account, but it focuses on a number of topics that we have been discussing here. All interested should track it down. The upshot: like traditional quant methods, AI’s market-beating performance (on paper) disappears in the real world (because of over-reliance on microcaps, slippage, etc.). In fact, the portfolios of the traditional quant funds looked similar to the AI portfolios!

Relating the article to the topic of this thread brings up the question: Will going “all-in” on quant tools make P123 a more viable platform with a growing number of subscribers?

I still think the following illustrates the huge market that P123 is missing (the non-quant or semi-quant): I often get asked about what I’m doing in the stock market (people know it is my profession). I tell them I have developed these new models that HELP me select stocks and it’s FUN. I talk about screening, which they have all heard about, and I mention the backtesting capabilities, which they are impressed with after a simple explanation. Then I tell them about your product (screening) and remind them that it is fairly easy and a lot of FUN.

I was able to search Google and find this article. Unfortunately I do not have an account either.

I have cobbled together a small amount of data from another source and set out to use all of the common machine learning tools, starting with regression, Ridge regression, Lasso regression, and so on up to artificial neural nets.

This has been more of an exercise than anything useful. The amount of data I have is small and not P123 data. I make no claim that I can compare it to P123’s methods, or to fundamental analysis for that matter. That is not to say that I have not developed some beliefs that, bottom line, I cannot prove right now.

But I do understand and have used all of the common methods.

The only method that I have limited experience with is support vector machines (with the kernel trick). This is computer intensive. Every time I try it I end up shutting it down after it has run 24 to 36 hours. Who knows how long it would run if I left it alone.
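To make that progression concrete, here is a minimal pure-Python sketch (toy data, not P123 data; the helper function is my own illustration): the step from ordinary regression to Ridge regression, for instance, is nothing more than an extra L2 penalty term.

```python
def ridge_beta(xs, ys, lam=0.0):
    """Closed-form slope for a one-feature least-squares fit (no intercept).

    lam = 0 gives ordinary least squares; lam > 0 adds the Ridge (L2)
    penalty, which simply shrinks the slope toward zero.
    """
    sxy = sum(x * y for x, y in zip(xs, ys))  # sum of x*y
    sxx = sum(x * x for x in xs)              # sum of x^2
    return sxy / (sxx + lam)

x = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]  # y = 2x exactly

print(ridge_beta(x, y))            # OLS slope: 2.0
print(ridge_beta(x, y, lam=14.0))  # Ridge shrinks it: 1.0
```

Nothing about adding that penalty, or about swapping this estimator for a fancier one, feels like crossing a line into “AI.”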

So here is my simple question, keeping within the context of Doug’s post: when in my progression from simple regression to artificial neural nets did I move from “quant methods” to “AI”?

I can tell you this. I did not see a parting of the clouds or hear a voice from the sky when I moved from Random Forests to something more advanced. The neural net did not start talking to me in Siri’s voice. I am not sure I would call it an “artificial intelligence.”

So my point is, I do not think there is a line between “quant methods” and “AI.” If there is, it is a line that I was not aware of when I crossed it.

But uh…Just like the movies, I have noticed that the neural net talks to me in my sleep and it wants me to buy a more powerful computer connected to the internet. It keeps saying something about world domination;-)

-Jim

Doug,

Here is the article from the WSJ. Like my earlier post, it highlights the difference between backtest and out-of-sample performance, this time for AI.

Regards
James

Use AI for Picking Stocks? Not So Fast
AI investing strategies, when put into practice, don’t produce particularly unique portfolios, a new study finds
By Mark Hulbert
Jan. 5, 2020 10:06 pm ET

Artificial intelligence burst onto Wall Street several years ago, to fanfare and hope. Unfortunately, AI-based investing strategies have struggled to live up to some of the more inflated expectations for their performance.

There is no denying these strategies’ theoretical promise. By being able to sift through otherwise prohibitively large amounts of data, and then “learn” from it, AI is supposed to be able to discover profitable patterns that were previously invisible to mere mortals.

And, sure enough, they appear to have done so—on paper. Doron Avramov, a finance professor at the Interdisciplinary Center Herzliyah in Israel, says that when tested using historical data AI strategies have been phenomenally successful, beating the market by as much as 40% on an annualized basis.

No other approach has come even close to producing that kind of a profit.

Making this market-beating potential even more alluring is the deteriorating profit of many of the well-known factors (or stock characteristics) that previous research had identified as having value when picking stocks—such as momentum, market cap, volatility, low ratios of price to earnings, book value, sales and so forth. Researchers have found that more than half of the paper profit that initial studies reported for those factors disappeared when they were put into practice.

Unfortunately, according to a new study recently completed by Prof. Avramov and two colleagues (Si Cheng of the Chinese University of Hong Kong and Lior Metzker of the Hebrew University of Jerusalem), the same thing is true about AI strategies. In the real world, their market-beating performance almost completely disappears.


Reality check
Prof. Avramov and his colleagues reached this conclusion after re-creating several different neural networks (a set of algorithms designed to recognize patterns) and other machine-learning techniques that past AI researchers have found to be worthwhile. They then fed into these networks virtually all of the indicators that previous research had found to have at least some value when picking stocks—more than 100 in total. They then “trained” their network on a database of U.S. stocks dating back to 1957, looking for interactions between, and combinations of, these indicators that were more profitable than any of them individually.

A number of alarm bells started going off as they examined the portfolios that their networks produced. For example, they noticed that much of the portfolios’ paper profits were coming from microcaps—stocks with tiny market caps. That’s troublesome because so few shares of these stocks trade that it’s difficult to establish a sizable position in them without causing their prices to skyrocket. It’s also difficult to borrow shares of these stocks when you want to sell them short.

This heavy reliance on microcaps is just one way in which the AI strategies often make unrealistic assumptions about the real world. Upon restricting their AI strategies to stocks that were relatively easy and cheap to trade, Prof. Avramov and his colleagues found that more than half of those strategies’ paper profits disappeared. And that was before transaction costs, which could easily eat up the remainder of those strategies’ theoretical profits—given that machine learning generates much higher trading volume than that of conventional strategies such as momentum and value investing, according to Prof. Avramov.

Mediocre machines
A perhaps even more surprising conclusion from the study is that the portfolios the AI strategies produced weren’t particularly distinctive. On the contrary, Prof. Avramov says, they were quite similar to portfolios produced by the well-known factors. In other words, these machine-learning techniques largely failed to live up to their promise of finding previously hidden patterns in the stock market.

All in all, it appears that there is “many a slip between the cup and the lip,” to quote the ancient proverb.

This perhaps helps to explain why hedge funds that employ AI haven’t outperformed the S&P 500 over the last decade. Consider the Eurekahedge AI Hedge Fund Index, which “is designed to provide a broad measure of the performance of underlying hedge-fund managers who use artificial intelligence and machine learning theory in their trading processes.” Since its inception in January 2010, the index has produced a 12.7% annualized return, in comparison to a dividend-adjusted 13.3% for the S&P 500.
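A quick back-of-the-envelope check on the two annualized figures quoted above shows how even that small gap compounds over a decade (a sketch only; exact index start and end dates would shift the numbers slightly):

```python
def cumulative_growth(annual_return, years):
    """Convert an annualized return into a total growth multiple."""
    return (1 + annual_return) ** years

# Figures quoted in the article, roughly ten years from Jan 2010
ai_hedge_funds = cumulative_growth(0.127, 10)  # Eurekahedge AI index
sp500_total    = cumulative_growth(0.133, 10)  # dividend-adjusted S&P 500

print(round(ai_hedge_funds, 2))  # about 3.31x
print(round(sp500_total, 2))     # about 3.49x
```

So a 0.6-point annual shortfall grows to roughly 0.18x of cumulative return over the period.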

These results don’t mean that AI is worthless, Prof. Avramov is quick to add. “It’s just that its potential has yet to be proven,” he says. “AI definitely has promise, perhaps not just as much promise as some have made it out to appear.”

