Python code for calling the P123 API

Steve - It would be 3 credits per date, as you said. Just be sure your understanding of datapoints is correct: 1000 datapoints would be, for example, 2 formulas/factors for 500 tickers for 1 date. The formula I used in this explanation is how Data/DataUniverse will work once we are done with the modifications we are working on now.

regards,
Dan
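Dan's datapoint accounting can be expressed as a quick back-of-the-envelope calculation. This is a sketch only - the 3-credits-per-date figure comes from his reply, and the function names are illustrative, not part of any P123 library:

```python
def datapoints(num_formulas: int, num_tickers: int, num_dates: int) -> int:
    """Datapoints = formulas/factors x tickers x dates, per Dan's example."""
    return num_formulas * num_tickers * num_dates

def credits_for_dates(num_dates: int, credits_per_date: int = 3) -> int:
    """Credit cost scales with the number of dates requested."""
    return num_dates * credits_per_date

# Dan's example: 2 formulas for 500 tickers on 1 date -> 1000 datapoints
print(datapoints(2, 500, 1))   # 1000
print(credits_for_dates(1))    # 3
```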

I haven’t been following this credit thing too closely because I wasn’t using them that fast, but whatever you just changed made the whole API thing useless to me.

Earlier in the week I did an API pull and it took about 700 credits. Now I just started the same pull and it immediately shot up to my quota of 10,000 without finishing.

Steve,

I think Phil wants returns for label or target and preferably excess returns of some sort. He can speak to what, exactly, he wants with regard to excess returns or anything else, however.

And in fact, if I were at P123, I would make a point of asking Philip.

-Jim

Thanks Jim. Yep I figured it out, you can use “Data” to do it.

I went to pull the labels (or returns or whatever name you’d like to use) and immediately ran out of credits (10,000 quota).

Btw I don’t mind doing all this testing to try and get it to work, but I am not going to pay extra to test it.

philjoe - given the limitations on API resources, I would refrain from doing anything until everything is done and documented on P123’s end. So far, all I have seen are vague statements about some features being available. Right now, it isn’t clear what is ready to go or what state the API call documentation is in.

OK, I think it’s been fixed now. Unfortunately, it is now telling me I need a data license to access close price data; however, I thought closing prices were supposed to be available for free.

The price data is available without a vendor data license in the Rank operation. An example in DataMiner would be to add this at the end of your script:
Additional Data:
- Close0: Close(0)
- Future 1MoRet: Future%Chg(20)

Price data will be available in the Data and Data Universe operations soon - that is in development.

Is that in any of the documentation? How would I do that in Python?

Dan - you have me confused. I thought that you guys weren’t using the Rank API for this new stuff. Are you saying that the Data API won’t return what Philjoe is asking for?

Steve, sorry if this is confusing. It’s a moving target since development is in progress. It will be clearer once we finish this round. The questions didn’t mention any endpoints, so I was providing info on the two possibilities. If you want price data TODAY without a data license, you can get it with Ranks. As you said, using Rank won’t be the preferred method, but this is not available in the Data endpoint/operation yet. Coming soon.

Philip - The API documentation is here: https://api.portfolio123.com:8443/docs/index.html#/Rank/_ranks
Below is a basic example of how to use additionalData in Python for the ranks_ranks endpoint:

    {
        'rankingSystem': 'Core: Value',
        'asOfDt': '2020-11-12',
        'universe': 'DJIA',
        'additionalData': [
            'Close(0)',
            'mktcap',
            'FRank("Pr2SalesQ",#All)',
        ],
    }
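To actually send that request body from Python, one option is a plain HTTP POST. This is a minimal sketch: the base URL is taken from the docs link above, but the /rank/ranks path and the bearer-token header are assumptions - check the API documentation for the real path and authentication scheme:

```python
# Base URL taken from the docs link above; the /rank/ranks path and the
# bearer-token header below are ASSUMPTIONS - verify against the API docs.
API_BASE = "https://api.portfolio123.com:8443"

def build_rank_request(ranking_system, as_of_date, universe, additional_data):
    """Assemble the request body shown in the example above."""
    return {
        "rankingSystem": ranking_system,
        "asOfDt": as_of_date,
        "universe": universe,
        "additionalData": list(additional_data),
    }

payload = build_rank_request(
    "Core: Value", "2020-11-12", "DJIA",
    ["Close(0)", "mktcap", 'FRank("Pr2SalesQ",#All)'],
)

# Hypothetical send (uncomment; requires the `requests` package):
# import requests
# resp = requests.post(f"{API_BASE}/rank/ranks", json=payload,
#                      headers={"Authorization": "Bearer <token>"})
# resp.raise_for_status()
# print(resp.json())
```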

Dan - I thought that the Data API was eliminating the need for extra data in the Rank API, so I don’t understand why the Rank API is or has been reworked. It is just adding delays to getting the Data API done, isn’t it? I can’t speak for philjoe, but it doesn’t make sense to have to run two different APIs and concatenate the results, in the short term or the long term for that matter.

This is what we are doing. Should be ready very soon.

We are re-working two API endpoints: /data/universe and /rank/ranks. Both of these will be able to download technical data (past & future) and the ZScore or FRank of a formula, so they should be well suited for ML/AI. With a data license you will be able to download anything. One API call will only download the universe’s data for one date. For now, we will enhance the DataMiner operations that use them to do multiple dates at once (like 10 years weekly).

The /data endpoint will not be suitable for AI/ML. It’s intended for small lists of stocks, and at the moment requires a data license no matter what you try to download.

The future performance functions are live now. I will post some documentation. We’re also working on samples, knowledge base, etc. Hang in there.

Thanks

OK, thanks - for the Rank API, is it one resource unit per date? And additional limitations depending on the number of data points?

Both the data/universe and rank/ranks endpoints will have a variable cost depending on the number of data points retrieved. The minimum cost is 1 API request, of course. For data points, we’re changing the cost: instead of 5K data points costing 1 API request, we’re upping it to 20K data points per API request.
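Under that pricing, the API-request cost of a pull works out to a ceiling division. A sketch of the stated 20K-datapoints-per-request rate (the function name is illustrative, not a P123 API):

```python
import math

def api_request_cost(data_points: int, points_per_request: int = 20_000) -> int:
    """Each API request covers up to 20K data points; any remainder still costs 1,
    and the stated minimum cost is 1 request."""
    return max(1, math.ceil(data_points / points_per_request))

print(api_request_cost(5_000))    # 1 (would also have cost 1 under the old 5K rate)
print(api_request_cost(100_000))  # 5
```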

Marco - this is pretty much a non-starter. Just as an example, I have a project that requires 5 years of weekly data. But I am looking into the future 1 year. So I need to separate the training/validation/test datasets by 1 year. This adds two more years. Then I have to split off the last year which becomes prediction dataset. That brings me up to 8 years of data. Then, I want to throw out the pandemic year because it isn’t representative of normal markets. So I am looking at retrieving 9 years of weekly data. This amounts to 9 x 52 API calls or 468 API calls just to capture one set of data that may or may not be even close to a final dataset for the project. It is just one stab at a solution and I am going to have to repeat the data collection many times as I evaluate inputs and targets.
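Steve's accounting can be written out as a quick sketch; the year counts and the one-date-per-call assumption are taken directly from his example:

```python
def weekly_api_calls(years: int, weeks_per_year: int = 52) -> int:
    """One API call per weekly date, per the one-date-per-call limit described above."""
    return years * weeks_per_year

# 5 training years + 2 extra years of separation for the 1-year lookahead
# + 1 prediction year + 1 replacement year after dropping the pandemic = 9 years
print(weekly_api_calls(9))  # 468
```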

So the fact is that most people here won’t be able to finish one project without buying more resource credits. And because of that, your Big Data attempt is going to fail. Adding insult to injury, you are allowing deep-pocketed customers with a data license to retrieve data without the one date = 1 resource limitation, meaning that you are giving preferred treatment to deep pockets. Maybe that is the intent, or maybe you just don’t realize it.

In any case, when people taste big data they will want more. But if they run out of resources before they taste it, then it ain’t going to happen.

Steve,

we’re still tweaking things.

Forget the API requests for a second. From a purely data-quantity aspect, we’re thinking of a cost of around $200 for 1 billion data points.

Can you show me the math for a use case that uses way more than 1B data points?

And for a one-time cost of $500 you can get 500K API requests that can be used to download 10B data points. Doesn’t seem like big pockets to me!
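Marco's 10B figure lines up with the 20K-datapoints-per-request rate mentioned earlier in the thread; a quick check, using only the numbers from his post:

```python
requests_bought = 500_000      # the one-time $500 purchase
points_per_request = 20_000    # rate from the earlier pricing post

total_points = requests_bought * points_per_request
print(f"{total_points:,}")     # 10,000,000,000 -> the quoted 10B data points
```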

Marco - you seem to think I am commenting on the price per datapoint. I’m NOT. I couldn’t care less what you charge for pulling data. I am saying that if you want a grass-roots movement, then get rid of the resource unit per API call. That’s all.

Resource units and API calls are two different, separate things. Resource units are the storage space of your account.

Did you see them mixed up somewhere?