IMPORTANT: New subscription plans

The new membership structures will require removing many screens, simulations, portfolios, books, ranking systems, formulas, and lists from one’s account.
P123 should provide an easy way to store them offline so they do not count toward resource units, and to upload them directly back to P123 if desired. This would also reduce server traffic, since the server currently updates all live models whether needed or not.

Good suggestion, Georg. I would not mind if resource units that are currently not used could be moved into a sandbox. This sandbox would not be immediately accessible and would not update, but we could still decide which items to move in and out. Just deleting them is not an option.

I agree with Georg. The ability to retain screens, custom formulas, etc. offline would resolve the resource unit problem. It would also give peace of mind that my years of work are held locally.

Just did a calculation of my resource units and it stands at nearly 2000. Most of that is in custom formulas built over many years, the bulk of which are highly specialized and only get called occasionally depending on the project I’m working on.

To delete them is NOT an option.

Caveat: I’m on a legacy plan, so for now I’m covered. But with the new plans and resource limitations, upgrading in the future makes no sense at the moment, unless my work can be held locally and uploaded when necessary.

Ditto on Georg’s suggestion. Moreover, I don’t understand how imposing resource units on resources that do not contribute to data use (e.g., custom formulas) has anything to do with Capital IQ’s new mandates. Seems overly restrictive.

I assume this means international data is on its way shortly?!?!?!?

Regarding resource units:
The sole intention of the resource units limit is to simplify the growing number of discrete limits for the numerous types of things a user can save to our system. It’s in no way meant to be restrictive or burdensome to existing users. Individuals remaining on their legacy membership level will not have to take any action and will not currently have to conform to the resource units limit. We’ll review the impact of resource units on existing Asset Manager and Research users to see how we might mitigate potential issues. We don’t want this to drive away business.

Regarding international data:
We have not in any way come to an agreement with S&P regarding international data at this time.

atw:
What we consider quant engine requests:
For example: running a simulation backtest (excluding book simulations), running a screen (excluding totals), running a ranking system performance test, or viewing the stock rank page. Generally, anything that takes a noticeable amount of time likely counts toward this.

There was a typo in the Asset Manager Tier 1 limit that has been corrected.

whotookmynickname:
The contract with S&P requires a rolling fifteen year maximum lookback for historical stock data.
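
A rolling lookback means the earliest testable date moves forward with the calendar rather than being fixed. As a minimal sketch of that idea (the 15-year figure is the only input taken from this thread; the function name is my own):

```python
from datetime import date

def earliest_lookback_date(today: date, years: int = 15) -> date:
    """Earliest date visible under a rolling N-year lookback window."""
    try:
        return today.replace(year=today.year - years)
    except ValueError:
        # today is Feb 29 and the target year is not a leap year
        return today.replace(year=today.year - years, day=28)

# A backtest run in March 2019 could reach back no further than March 2004.
print(earliest_lookback_date(date(2019, 3, 1)))  # 2004-03-01
```

So each year, a year of early history drops off the front of the window.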

The download API will be a service we’ll provide (details to come).

paulnovell:
Research Tier 1 is for Research users with under $15K monthly revenue, not over.

nisser:
The intended audience of Individual Tier 1 (Screener) is individuals wanting to screen stocks and ETFs on specific dates. We would expect these users will prefer screening with the latest data, but we offer 5 years of stock history if they need to check something historically.
The intended audience of Individual Tier 2 (Backtest) is individuals wanting to additionally run screen backtests and track their Designer Model subscriptions on our system using live books.
Both tiers offer access to TRADE and will allow users to place orders and view the positions of their linked Interactive Brokers accounts.

This is not something we choose to do. It is forced upon us by the data provider. That said, the data provider is happy to make more data available to anyone who wants it but at a price . . . and as you’ve seen from the initial announcement regarding professional investors, that price is not likely to be nominal.

Actually, I NEVER go back more than 10 years unless I need to in order to publish something. The bear market around 2000 was unique in its cause and is of no probative value for a potential future bear market. Even worse, the immediate post-bear period was more iconoclastic because the Fed was flooding the market with liquidity and because data-driven platforms such as ours were new enough to make for extremely uncrowded trades, resulting in gains that cannot be rationally expected going forward. (Actually, many stocks that traditionally get slammed in bear markets did wonderfully back at that time because they weren’t as generationally overvalued as the narrow segment that sparked the mess. So testing to 1999 could give you a dangerously inaccurate view of where to aim for the next down period.)

I understand you and others here disagree with me on this. But I do think it’s important that the p123 community as a whole understands that the usefulness of older data is not an absolute fact but a matter of debatable opinion.

The obvious concern for those of us who are grandfathered is at what point will legacy plans be eliminated entirely?

What were the final terms of the negotiation with S&P regarding legacy plans?

I expect that P123 is as upset about the 15-year limit as the users. Will users who are being required to purchase an S&P license at $24,000/year still be subject to the 15-year limit?

Some more clarifications

Re. resource units

A few power users can bring (and have brought) our database down. But we would also like to better monetize power users regardless of the resources they use (screens, ranks, formulas). We will monitor these limits and adjust accordingly. Some limits do seem too low. We will approach this scientifically, revising to satisfy the majority of normal usage and giving power users (usually 5% of the population) the option to buy more resource units.

Re. APIs

These are in the works. There will be three types: system requests (rebalance, backtest), download of derived data (ranks, holdings), and raw data (P.I.T. factors, financials). The latter will require an S&P license.
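
To make the three categories concrete, here is a hypothetical sketch; the category and request names are illustrative assumptions, not the actual P123 API. Only the three-way split and the S&P-license requirement for raw data come from the announcement:

```python
# Hypothetical grouping of the three planned API request types.
# Request names are illustrative placeholders, not real endpoints.
API_CATEGORIES = {
    "system": ["rebalance", "backtest"],
    "derived_data": ["ranks", "holdings"],
    "raw_data": ["pit_factors", "financials"],
}

def requires_sp_license(category: str) -> bool:
    """Per the announcement, only raw-data requests need an S&P license."""
    return category == "raw_data"

print(requires_sp_license("raw_data"))      # True
print(requires_sp_license("derived_data"))  # False
```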

Re. EU data

These new memberships had to be in place first. However, using S&P for Europe is not set in stone!

Re. 15 years backtest

This is not as bad as it sounds. Lots of data was missing from 1999-2004 (insider, institutional, short interest). Also, estimates were spotty until the end of 2001. That’s what S&P has, no matter how much money you throw at them. Also, perspectives change, analysts’ opinions change, markets change, etc. But please, please realize that simply having 20 years will not make your system “ROBUST”. Not by a long shot, no matter how much you think you are not curve fitting. I also believed it was possible to avoid curve fitting, due to my engineering mindset. The error became painfully obvious when many curve-fitted models fell apart. What will make your system robust is your experience, and using the 15-year backtest to make sure the system is doing what you think: it has the expected turnover, it’s buying what you think should be bought (we’re going to add tools for this, like “as-of” stock pages), it’s not buying penny stocks, etc. I would say that to build a brute-force, robust system you probably need 30 years of data.

Re. S&P license

With an S&P license you get 20 years rolling. However, it currently goes back only 19 years, since our systems start in 1999. You will get the full 20 years next year.

Thanks for your feedback.

Does Europe include Japan, by any chance?

Hi Marco,

Understood. I would just suggest that P123 apply quotas to runtime resources and not to static resources like formulas, ranking systems, and simulations. The amount of static resources - i.e., storage - needed to hold the sim rules + ranking system + universe is likely very small.

Best,
Walter

EDIT: Perhaps for power users, P123 could offer something like Amazon AWS instances - but hosted on P123, of course.

James, please keep off-topic discussion out of this thread.

Sorry, thought hearing from someone who appreciates the legacy membership and who might even want to find a way to pay more could be refreshing.

Probably should have asked about international data (as above).

-Jim

[quote]
Re. EU data

These new memberships had to be in place first. However, using S&P for Europe is not set in stone!
[/quote]
I am not familiar with all the data provider choices, but I wouldn’t at all mind if you would use Reuters for international data (and possibly for U.S. data too).

[size=3]Advantages of Reuters[/size]
Bargaining power
Right now, S&P probably thinks you are limited to them, which gives them all the bargaining leverage. They raised prices and cut service. If they see that you have other realistic options, you gain much more bargaining power.

Price
My impression is that Reuters is cheaper than S&P. Some sites charge $250 a year for Reuters int’l data.

Estimate revisions
Reuters seems to use a different source for analyst estimates. When you switched from Reuters to Compustat, the performance of systems based on estimate revisions went down.

Number of stocks
Reuters seems to have more stocks in its database than S&P does. For example, in the Canadian stock market S&P covers 1,520 stocks while Reuters has 2,354. This is a huge advantage for some people.
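
For the Canadian example, the size of the coverage gap works out as follows (the two counts are taken from this post; everything else is just arithmetic):

```python
sp_count = 1520       # Canadian stocks covered by S&P (per the post above)
reuters_count = 2354  # Canadian stocks covered by Reuters

extra = (reuters_count - sp_count) / sp_count
print(f"Reuters covers roughly {extra:.0%} more Canadian stocks than S&P")
# roughly 55% more
```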

It worked
From 2005-2010, we did very well using Reuters data. In fact, Reuters has more valuable estimate revisions. Some people noticed a degradation in performance after the switch from Reuters to S&P, especially small investors who had been able to take advantage of the expanded opportunities in smaller stocks and the better estimate revisions.

[size=3]Advantages of S&P[/size]
PIT fundamentals: what the market knew, and when they knew it. In the old days P123 worked around this problem (correct me if I’m wrong) by synthesizing PIT data from the ten-year financials and by snapshotting new data. So in practice, this was rarely an issue back then.

The S&P indexes. For some reason certain systems work better within S&P 500 and 1500 companies. In addition, some people prefer to use S&P indexes as a benchmark. This issue does not seem to be insurmountable.

EBIT- and EBITDA-based value ratios worked better for some reason with S&P data, which was apparent when you switched over from Reuters to Compustat, possibly because Compustat standardized EBIT and EBITDA better. On the other hand, Reuters now has adjusted EBIT and EBITDA, which may mean that they have caught up to S&P in this area.

Implementation
The cost to implement and switch over to Reuters (or any other data provider) is not minimal.

The bottom line
For some users (such as myself) Reuters data may actually be better.

You may want to consider setting up a separate site using Reuters data. It would be like the old days when you started P123, except that it would become operational much faster due to your experience. This would save you the trouble of switching over the old system.

Hi Marc:

I always enjoy your posts - even when I disagree.

“Actually, I NEVER go back more than 10 years unless I need to in order to publish something.”

My view - I would not be comfortable using a model that tested great for the last 10 years (2008-2018) and did terribly for 2003-2007. Just testing the most recent 10 years (and assuming the method would have done similarly well for previous years) is taking an unnecessary risk. There is nothing to prevent the conditions of 1999-2003 from recurring in the future, so it would be good, if possible, to have test results from that period.

“The bear market around 2000 was unique in its cause and is of no probative value for a potential future bear market. Even worse, the immediate post-bear period was more iconoclastic because the Fed was flooding the market with liquidity.”

My view - The 2000 bear was unique. Similarly, the 2007-2008 recession was unique. The market dislocation in the fall of 2001 was unique (due to the terrorist attacks). All unique. One could argue that the exceptional gains of 2017 (the Trump effect and anticipation of lower tax rates) were unique. Should we exclude 2017 from our testing because it is unlikely US corporate tax rates will be reduced by another 14% (down to 7%) in the future? Every year is unique in some way; that’s why more data is better.

“and because data-driven platforms such as ours were new enough to make for extremely uncrowded trades, resulting in gains that cannot be rationally expected going forward”

My view - True, what worked in the early and mid 2000s will not work as well today because Portfolio 123 and its users (and others using different services) have made the markets a bit more efficient – not totally efficient, but more efficient than they were. Thus one can’t use results from those years to expect the same gains today. Today’s gains will be lower. However, test results from those years are nonetheless useful to me.

“(Actually, many stocks that traditionally get slammed in bear markets did wonderfully back at that time because they weren’t as generationally overvalued as the narrow segment that sparked the mess. So testing to 1999 could give you a dangerously inaccurate view of where to aim for the next down period.)”

My view - If we ONLY used data from 1999-2003 to develop a method, we’d be exposing ourselves to unknown risks in other market conditions. However, without the 1999-2003 data we would not have as clear evidence of how risky dot-com-style stocks could be. Data from 1999-2003 is valuable. If it had little or no value, the data provider would not be charging big bucks to professional clients who want access to such data to do testing that is as extensive and robust as possible.

“I do think it’s important that the p123 community as a whole understand that the usefulness of older data is not an absolute fact but is a matter of debatable opinion.”

My view - I expect such disagreements have to do with the different types of trading/investing a person plans to do, and with personality differences regarding confidence levels. I once read a book which argued that data older than 2 years is worse than useless. The author’s view was that market behavior changes from year to year, so to stay in step with the current market one should not get confused by looking at what worked 3 or more years ago. Obviously he was looking to engage in a very different type of trading than I am interested in. Sure, there is a debate as to whether 10 years, 15 years, 20 years, or even just 2 years is the right amount of time to use. But just because there is a debate does not mean that data from 1999-2003 is useless to everyone.

Thanks for listening to my mini rant.
Brian

Interesting, Chipper. I always thought Compustat had better data quality. I was testing out Eikon and found some very basic errors (shares outstanding omitted Class B shares, which are convertible into regular common shares 1-for-1) and other silly mistakes. This was for US data.

Is that not the case for global data?

Agree.

If I recall, one of the reasons for the switch to S&P was that Reuters tried to raise its charges to Portfolio 123 at the time. Could it be that S&P gave a good initial deal to get the business away from Reuters, and now figures it’s time to make some money from P123? Or perhaps S&P is just trying to catch the pros who have signed up with P123 to avoid what S&P charges professionals directly? Being an individual (non-business) subscriber, I hope it’s just the latter and not the beginning of future price increases from S&P.

Having lived through the change from Reuters to S&P, I’m not keen to do the reverse. As I recall, the switch to S&P resulted in a big improvement in the data before 2002/2003. P123 started with little or no small-cap data for 2001-2002, as I recall, and no data at all for 1999 or 2000. So going back to Reuters could make our current concerns about pre-2003 data look small.

Brian

I for one was very happy with Reuters data. PLUS, being able to download screen results unrestricted was a huge benefit.