2020-05-18 09:24:32,394: API request failed: ERROR: You are limited to 250 requests per hour. Please wait until 11:36 AM (12 minutes from now) before making additional requests.
Let's say I made a file for 1000 requests. Is there a way you can program the Data Miner to do 250, wait for an hour, continue with the next 250 an hour later, and so on, without needing me to manually split it into 4 files of 250 each? Can you throttle the file so the limit doesn't defeat the purpose by making more work for nothing? Thanks
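Until something like this is built in, the batching described above can be sketched in a few lines of Python. This is a hypothetical helper, not part of the Data Miner; `send` stands in for whatever function issues one API call, and the sleep is injectable so the pause can be skipped when testing:

```python
import time

def run_in_batches(requests, send, batch_size=250, pause_seconds=3600,
                   sleep=time.sleep):
    """Send `requests` in chunks of `batch_size`, pausing between chunks.

    Hypothetical sketch: `send` is the caller's one-request function,
    `sleep` defaults to time.sleep but can be replaced in tests.
    """
    results = []
    for start in range(0, len(requests), batch_size):
        batch = requests[start:start + batch_size]
        results.extend(send(r) for r in batch)
        # Only pause if there is another batch still to send.
        if start + batch_size < len(requests):
            sleep(pause_seconds)
    return results
```

With 1000 requests and a batch size of 250, this issues four batches with three one-hour pauses in between, so a single input file stays a single file.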
ClientException: API request failed: Unrecognized parameter <columns>
If I don't include the "columns" parameter in the dictionary, then everything works, but I just get the aggregate (or top-level) rank and none of the underlying factor ranks.
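To make the behavior concrete, here is a minimal sketch of the two payload shapes being compared. Only "columns" is named in the thread; every other key here is a placeholder, not the API's real parameter set:

```python
# Placeholder keys for illustration only; "columns" is the one
# parameter actually discussed in the thread.
base_request = {"rankingSystem": "MyRanks", "asOfDate": "2020-05-18"}

# Omitting "columns" works, but returns only the aggregate rank:
aggregate_only = dict(base_request)

# Adding "columns" is what triggers "Unrecognized parameter <columns>":
with_columns = dict(base_request, columns="factor")
```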
I do think this is very nice: it has great potential and is a clear improvement in the volume of data that can be downloaded.
But why is it that we don't just expand on what we can do now with Excel downloads?
As it is, I am going to use this for the sole purpose of getting an expanded download of something that was (it seems) never limited by the data vendor into an Excel spreadsheet on my office computer, so I can load it onto my MacBook to use in Spyder, Jupyter Notebooks, or Google's Colab.
Again, good that I can download more data now. But why not just expand the Excel downloads?
I am sure there is a good reason that I am not aware of, so I will not push the suggestion, I guess.
But it does seem (to this non-programmer) that P123 has to generate the data and download it either way. Are you doing (and creating for members) a lot of work that may not be necessary for even the most advanced users?
I do get that Excel limits a sheet to just over a million rows (1,048,576 rows by 16,384 columns), which means maybe downloading 2 spreadsheets for the most demanding data requirements. I will be using fewer than 16,384 factors (columns) most of the time ;-) The row limit should never require more than 2 spreadsheets with any reasonable universe.
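The arithmetic behind that claim is easy to check. A small sketch (the constant is Excel's documented per-sheet row limit; the helper function is mine):

```python
# Excel's hard per-sheet limit: 1,048,576 rows by 16,384 columns.
EXCEL_MAX_ROWS = 1_048_576

def sheets_needed(n_rows, header=True):
    """How many Excel sheets are needed to hold n_rows data rows,
    reserving one row per sheet for a header if header=True."""
    capacity = EXCEL_MAX_ROWS - (1 if header else 0)
    return -(-n_rows // capacity)  # ceiling division
```

So even a two-million-row download fits in two sheets, and any universe under roughly a million rows fits in one.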
philjoe, please try it now. We changed the API monthly limit from 500 to 5K/mo for now, until we figure out what to do with the API. We're also keeping it limited to make sure we can handle the load. The idea is to have it included in P123 memberships for light use, and to have a few options for power users.
What would be the right syntax in the Data Miner for this formula? If I understood the readme correctly, all I have to do is put quotes around the whole formula to pass it as a string, so:
This would be the correct value (please note that you have to escape inner double quotes):
"Aggregate(\"EarnYield\",#Industry,#Avg,16.5,#Exclude,False,True)"
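In other words, because the formula itself contains double quotes, the inner ones are escaped inside the outer quoted string. A quick Python sketch of the same idea (the Data Miner's own file format may differ in details, but the escaping principle is the same):

```python
# Outer quotes mark the whole formula as a string; the inner
# double quotes around EarnYield are escaped with backslashes.
formula = "Aggregate(\"EarnYield\",#Industry,#Avg,16.5,#Exclude,False,True)"
```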
I had been using Python with Selenium to extract rank data on the buy and sell dates for positions in some of my simulations. Yesterday I transitioned to the new API, which seems much simpler, quicker, and cleaner to use, but I ran up against the request quota today. What is the current request quota limit? Do I just need to revert to my old method?