Update engine with new FactSet-only functions

After digging a bit more into this, it seems that the increased volatility in Sentiment Ranks (which leads to increased turnover in a Sentiment-driven portfolio) is being driven largely by the “Estimate Revision” node in the Sentiment ranking system. As described by Yuval elsewhere on this forum, the FactSet engine is not correctly applying stock splits or dividends to the estimate data, which could be the cause of this problem (i.e., artificial changes in estimates increasing estimate volatility). As such, I’m going to stop work on this until that bug fix is completed and see whether it fixes the problem.

Cheers,

Daniel

That shouldn’t affect estimate revisions, only yearly estimates of stocks with subsequent splits or spinoff activity. Dividends aren’t actually a problem after all; I spoke/wrote too soon.

Interesting. My basis for thinking that it would was that Estimate Revisions CY are calculated in the Core: Sentiment Ranking System as follows

(CurFYEPSMean - CurFYEPS4WkAgo) / abs(CurFYEPS4WkAgo)

If we take the case of AAPL, the 2010 CurFYEPSMean is given as 1.87 in FactSet versus 13.14 in Compustat. Since the current CurFYEPSMean is given as 12.35 in both systems, this would imply that the FactSet earnings estimates have an implicit upward bias that is not present in the Compustat data. Extrapolating a step further, one could reasonably assume that a number of companies have an implicit upward (or in some cases downward) bias in their FactSet numbers. This bias would introduce more volatility into the earnings estimates over time and could show up as increased variability in the Estimate Revision node rankings inside the Sentiment ranking system. Obviously the easiest test for this will be seeing whether the increased variability in rankings disappears once the bug in question is fixed.
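To make the mechanism concrete, here is a minimal Python sketch (the numbers are purely illustrative; the factor of 7 is simply the ratio between the two 2010 figures quoted above) of how a split applied to one term of the revision formula but not the other would produce a large artificial revision even though analysts changed nothing:

```python
# Toy illustration only -- not P123 or FactSet data.
# The Sentiment system's revision formula:
#   (CurFYEPSMean - CurFYEPS4WkAgo) / abs(CurFYEPS4WkAgo)

def est_revision(mean_now, mean_4wk_ago):
    """Estimate revision as defined in the Core: Sentiment ranking system."""
    return (mean_now - mean_4wk_ago) / abs(mean_4wk_ago)

# Consistently adjusted, unchanged estimates -> no revision.
print(est_revision(1.87, 1.87))        # 0.0

# A ~7-for-1 adjustment applied to today's mean but not to the value from
# four weeks ago -> a large artificial negative revision with no real change.
print(est_revision(13.14 / 7, 13.14))  # roughly -0.86
```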

If I am in fact incorrect in my assumption here, do you have any thoughts on what would be causing the increased variability in Estimate Revision rankings? Note that the current-date estimates provided by FactSet and Compustat for the P123 Large Cap universe are basically identical, which to me implies that the problem has something to do with how the FactSet earnings estimates are changing over time.

Thanks,

Daniel

I’m as mystified as you are, Daniel. Can you please show me how you figured out that the increased variability in the P123 ranking system is due to FY estimate revisions? The reason I don’t think the bug is at fault is that the difference between CurFYEPSMean and CurFYEPS4WkAgo is just as large or small whether you adjust for subsequent splits or not: the adjustment scales both terms equally, so the revision ratio comes out the same either way. And there’s no reason to assume there’s an implicit upward bias, since there are plenty of companies with reverse splits. And that bias wouldn’t explain increased variability in this one measure.
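To show that point with a throwaway Python snippet (my own toy numbers, not vendor data): a split factor applied uniformly to both terms cancels out of the ratio.

```python
# Toy numbers only. If a split factor multiplies BOTH CurFYEPSMean and
# CurFYEPS4WkAgo, the revision ratio is unchanged, so a uniform adjustment
# issue can't by itself move the Estimate Revision node.

def est_revision(mean_now, mean_4wk_ago):
    return (mean_now - mean_4wk_ago) / abs(mean_4wk_ago)

adjusted   = est_revision(12.35, 12.00)          # split-adjusted figures
unadjusted = est_revision(12.35 * 7, 12.00 * 7)  # same figures, unadjusted

print(adjusted, unadjusted)  # identical -- the factor of 7 cancels
```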

Ranking > Reverse Engineering

You will get the standard deviation for each factor within a ranking system. You will know which factor is a culprit.

However, this feature can only be run on 20 quarters. Would it be possible to increase it to more quarters? 20 is not that much.
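Roughly, the idea is something like this (a toy Python sketch of the concept, not the actual Reverse Engineering feature):

```python
# Compute the standard deviation of each factor's rank across rebalance
# dates; the factor with the largest stdev is the likely turnover culprit.
import statistics

# hypothetical ranks (0-100) for one stock over five rebalance dates
ranks_by_factor = {
    "Estimate Revision": [85, 40, 90, 35, 80],
    "Price Momentum":    [60, 62, 58, 61, 59],
    "Analyst Sentiment": [70, 68, 72, 69, 71],
}

for factor, ranks in ranks_by_factor.items():
    print(f"{factor:18} stdev = {statistics.stdev(ranks):6.2f}")
```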

Thanks

Yuval, I didn’t do anything especially fancy to come to that conclusion. I have a very simple model that buys the top 25 stocks according to a ranking system and sells them when their RankPos drops below 100. I am running that model on the P123 Large Cap universe and using the Core: Sentiment ranking system.

https://www.portfolio123.com/port_summary.jsp?portid=1611080
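For clarity, the buy/sell rule in that model amounts to something like the following (a rough Python sketch of the logic, not the actual P123 simulation; the function and variable names are mine):

```python
# Hold the top 25 by rank; sell a holding only when its rank position falls
# outside the top 100; refill empty slots from the best-ranked names.

def rebalance(holdings, rank_pos, top_n=25, sell_rank=100):
    """holdings: set of tickers held; rank_pos: ticker -> position (1 = best)."""
    kept = {t for t in holdings if rank_pos.get(t, 10**9) <= sell_rank}
    for t in sorted(rank_pos, key=rank_pos.get):   # best-ranked first
        if len(kept) >= top_n:
            break
        kept.add(t)
    turnover = len(holdings - kept) / max(len(holdings), 1)
    return kept, turnover   # new holdings and this period's sell turnover
```

Under the bug hypothesis, noisier Estimate Revision ranks would knock holdings out of the top 100 more often, which is exactly what higher turnover would pick up.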

If I run that model on the Legacy engine then I get a turnover of ~800%, but if I run the same model on the FactSet engine I get a turnover of ~… well shit, never mind, the higher turnover is gone… What did you guys change overnight??? Whatever it was, it completely fixed the problem.

Lol

Well this is definitely a good thing, but I kinda hate myself for spending so much time on this now.

Cheers,

Daniel

I need to apologize for not knowing exactly what I was saying earlier. I now have everything clear. Here’s what’s going on.

All estimate numbers for FactSet stocks reflect FactSet data EXCEPT for estimate revisions (functions like CurFYUpRev4Wk), which are still powered by Compustat.

The ConsEstUp and ConsEstDn functions are under review. We’re not precisely sure what those functions are doing, even after talking with FactSet about it. We’re hoping we can use them to get the equivalents of Compustat’s estimate revisions; if not, we’ll figure out some way to get that data.

The estimate revisions in the P123 Sentiment ranking system are entirely different from the estimate revisions given by CurFYUpRev4Wk and so on. The ones in the ranking system are based on changes in CurFYEPSMean; those are powered by FactSet numbers, not Compustat. The estimate revisions under EPS REVISIONS in the Factor & Function Reference are based on the number of analysts changing their estimates, not on the mean estimate.
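If it helps, here is a small Python toy (my own example, not vendor code) showing how the two measures differ: one tracks the change in the consensus mean, the other counts analysts revising up or down.

```python
# Four hypothetical analysts' EPS estimates, four weeks apart.
estimates_4wk_ago = [1.00, 1.10, 1.20, 1.30]
estimates_now     = [1.05, 1.10, 1.15, 1.40]

mean_then = sum(estimates_4wk_ago) / len(estimates_4wk_ago)
mean_now  = sum(estimates_now) / len(estimates_now)

# 1) What the Sentiment ranking system uses: change in the consensus mean.
mean_based_revision = (mean_now - mean_then) / abs(mean_then)

# 2) What the EPS REVISIONS factors report: counts of analysts moving up/down.
ups   = sum(a > b for a, b in zip(estimates_now, estimates_4wk_ago))
downs = sum(a < b for a, b in zip(estimates_now, estimates_4wk_ago))

print(round(mean_based_revision, 4))  # ~0.0217
print(ups, downs)                     # 2 up, 1 down
```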

I hope this clarifies things and again, I apologize if I misstated things earlier.

Any idea how to get the consensus estimate standard deviation of #EPSQ for both one and two quarters ago?

ConsEstStdDev(#EPSQ,-1) and ConsEstStdDev(#EPSQ,-2) result in an error. Or should FHist be used?

These new estimate functions look very promising.

For now, at least, you would have to use FHist.

Can you help me get the correct FHist formula for both cases? I am really struggling to get it right. Also, Marco’s past comment that “I would NOT use FHist with estimates” makes me a bit uncomfortable.

Thank you in advance for your help.

Marco said that because if you use it wrong, you’ll end up with the wrong quarter’s estimate. ConsEstMean(#EPSQ,0,13) will be totally different in most cases from FHist("ConsEstMean(#EPSQ,0)", 13) because the first will point to the most recent quarter and the second will usually point to the quarter before that.
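A toy Python timeline (my own simplification, not the P123 engine) of why the two point at different quarters: FHist re-evaluates the formula as of 13 weeks ago, and at that point the “most recent quarter” was usually the one before today’s.

```python
# Simplified stand-in for "the most recent completed quarter" (#EPSQ,0).
from datetime import date, timedelta

def most_recent_quarter(asof):
    q = (asof.month - 1) // 3                      # index of the current quarter
    year, prev = (asof.year, q - 1) if q > 0 else (asof.year - 1, 3)
    return f"{year}Q{prev + 1}"

today = date(2020, 8, 15)
print(most_recent_quarter(today))                        # 2020Q2
print(most_recent_quarter(today - timedelta(weeks=13)))  # 2020Q1 -- a different quarter
```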

For the first case, this should work pretty well: Eval(LatestActualDays=NA, NA, FHist("ConsEstStdDev(#EPSQ,0)", Trunc(LatestActualDays/7 + 1)))

For the second, it would be more of a guess: use the same formula but with + 14 in place of + 1.
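Put another way, the week offsets above amount to the following (my own reading of the formulas, assuming LatestActualDays is the number of days since the most recent reported quarter):

```python
import math

def weeks_back(latest_actual_days, quarters_ago):
    """Week offset to pass to FHist, per the formulas above."""
    if quarters_ago == 1:
        return math.trunc(latest_actual_days / 7 + 1)    # just past the last report
    if quarters_ago == 2:
        return math.trunc(latest_actual_days / 7 + 14)   # roughly one quarter further
    raise ValueError("only 1 or 2 quarters ago covered here")

print(weeks_back(45, 1))  # 7  -> FHist("ConsEstStdDev(#EPSQ,0)", 7)
print(weeks_back(45, 2))  # 20 -> FHist("ConsEstStdDev(#EPSQ,0)", 20)
```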

Perhaps we should enable -1, -2, -3 etc in the second parameter. But we haven’t done so yet, and I’m not sure when we will.

As always with FHist, you need to watch out for splits. FHist never adjusts for those.
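A tiny numeric example of that caveat (hypothetical numbers): a value pulled from before a 2-for-1 split is on the old share basis, so you have to divide it by the split ratio yourself before comparing it with current data.

```python
pre_split_estimate = 4.00   # consensus EPS retrieved from before the split
split_ratio = 2.0           # a 2-for-1 split has happened since then

# Without manual adjustment the historical figure looks twice as large as
# today's per-share numbers; dividing by the ratio puts it on the same basis.
print(pre_split_estimate / split_ratio)   # 2.00
```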