P123api rank_perf function - specify rank system directly

The documentation for the rank_perf function in the p123api states that it is possible to refer to the ranking system by name or to specify it “directly”.

I interpret this to mean that I can pass a properly formatted XML ranking system specification to the rank_perf function. However, I am unable to figure out the syntax for specifying the ranking system directly. Any help with this would be greatly appreciated.

Cheers,

Daniel

I don’t believe that’s possible, though I wish it were as it’d save me a lot of API credits.

If you look at the sample settings_ranking.yaml for DataMiner, which uses the p123api, it implies that providing a ranking system in XML format requires updating the ApiRankingSystem that /rank/performance is then run against. See the NOTES section at the bottom.

I’ll still defer to the p123 developers though for an authoritative answer.

# Ranking definition
# 
# There are four ways to specify the Ranking

# 1) Using an existing ranking system. 
# Can be one of our pre-defined systems or yours.
# By name or by its unique id.
Ranking: 'Core: Value'  # use quotes if name has ':'
# or
Ranking: 12345


# 2) By specifying the Nodes in the xml format used on our website.
Ranking:
    Nodes: in xml format
    Method: NAsNegative #( [NAsNegative] | NAsNeutral )

# 3) By specifying the Nodes explicitly using user friendly syntax
Ranking:
    Rank: Higher # ( [Higher] | Lower | Summation )
    Method: NAsNegative #( [NAsNegative] | NAsNeutral )
    Nodes: see setting_ranking_nodes.yaml
    
# 4) By specifying a Quick Rank formula (only valid in screen operations)
Ranking:
    Formula: MktCap # Enter quick rank formula
    Lower is Better: true # ( true | [false] )
    
# NOTES 
# For (2) & (3) a ranking system called APIRankingSystem will be created.
# This ranking system will be reused by the API and will always contain the
# version specified by the last iteration of the API operation

Hi Daniel. The documentation was incorrect and has been fixed. Sorry for the confusion. You would need to use the ApiRankingSystem as feldy pointed out. The ApiRankingSystem ranking system can be modified in your script using the rank_update endpoint.
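
For reference, here is a minimal sketch of that round trip in Python. It assumes the p123api client’s rank_update and rank_perf methods as discussed in this thread; the parameter names and the XML below are illustrative guesses rather than the documented interface, so check the API docs for the exact field names.

import p123api

client = p123api.Client(api_id='YOUR_API_ID', api_key='YOUR_API_KEY')

# Hypothetical ranking-system XML in the format used on the website.
nodes_xml = """
<RankingSystem RankType="Higher">
    <StockFormula Weight="100" RankType="Higher" Name="Value">
        <Formula>1 / PERatio</Formula>
    </StockFormula>
</RankingSystem>
"""

# 1) Push the XML into the reusable ApiRankingSystem (assumed field names).
client.rank_update({
    'nodes': nodes_xml,
    'type': 'stock',
})

# 2) Run /rank/performance against the system that was just updated.
perf = client.rank_perf({
    'rankingSystem': 'ApiRankingSystem',  # by name, as with any ranking system
    'startDt': '2015-01-01',
    'endDt': '2020-01-01',
})
print(perf)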

Dan,

Is there a way to change the name of the updated ranking system? I’ve run into a pretty significant problem with a piece of software I wrote that relies on the API, and I am trying to find a solution.

-Daniel

Daniel,
rank_update doesn’t take a ranking system name as input. It only works with the ApiRankingSystem. If you would like to start a private chat, we can discuss the issue you are having.

I’d like to see the result of this conversation. Please feel free to continue it here. Dvevin, can you describe the problem you have run into?

Thanks
Tony

Tony,

I am attempting to take advantage of some “embarrassing parallelism” in my algorithm. At a high level, my algorithm queries the p123 rank_perf function for some data, runs a local optimization routine, and then queries rank_perf again. I am able to divide the work up between a number of threads to save time, but each thread needs to access the rank_perf function independently.

Unfortunately, I have to admit that it took a significant amount of time and credits for me to realize a fatal flaw in my approach: thread A and thread B can get the results of their rank_perf queries crossed. If thread A updates the ranking system, then thread B updates it, and then thread A runs rank_perf, thread A ends up with the results for thread B.

If the p123api allowed naming of ranking systems created through the API, or allowed a ranking system to be specified in XML format as an input to the rank_perf function, I would have a simple workaround.

Barring that, I am attempting to develop a mutex of some kind to solve the problem, but this will be a performance drag.
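
For concreteness, the workaround I have in mind looks something like the sketch below, reusing the client from the sketch earlier in the thread; the same caveats about method and parameter names apply. The lock serializes each update/query pair so the results can no longer get crossed.

import threading

p123_lock = threading.Lock()

def rank_perf_atomic(client, nodes_xml, perf_params):
    # Hold the lock across the update + query pair so another thread
    # cannot overwrite ApiRankingSystem between the two calls.
    with p123_lock:
        client.rank_update({'nodes': nodes_xml, 'type': 'stock'})
        return client.rank_perf(perf_params)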

Cheers,

Daniel

Daniel, by “threads” I assume you mean you want to run multiple rank_perf calls at the same time? If so, I don’t think P123 will allow that.

No, that isn’t what I meant. The vast majority of the work being done by the algorithm is in the optimization routine, which runs independently of the p123api. The rank_perf queries are relatively infrequent. The purpose of running multiple threads or processes is to run many optimizations in parallel. Each thread needs to query the p123api occasionally, and these queries can overlap. When that overlap occurs, it creates the problem I discussed above.

Hi Daniel. I spoke to the developer, and rank_update can actually update any ranking system using an id parameter, e.g. "id": 411969.

This is an undocumented feature; the idea was that there was no reason to update anything other than the ApiRankingSystem, since only one ranking system can be updated at a time. If you use it to update multiple ranking systems, you will need to handle the 406 error you will get when a thread tries to run /rank/performance while another thread is already running it.
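
Roughly, per-thread usage could look like the sketch below. The id parameter is as described above; the other parameter names and the exception handling are assumptions, so adapt them to the actual library.

import time
import p123api

def rank_perf_own_system(client, system_id, nodes_xml, perf_params):
    # Update only this thread's own ranking system (assumed field names).
    client.rank_update({'id': system_id, 'nodes': nodes_xml, 'type': 'stock'})
    # Refer to the system by its unique id, per the yaml sample (assumed).
    params = dict(perf_params, rankingSystem=system_id)
    while True:
        try:
            return client.rank_perf(params)
        except p123api.ClientException as exc:
            # Another thread is running rank performance; back off and retry.
            if '406' not in str(exc):
                raise
            time.sleep(5)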

Dan,

Thanks, this is great; much appreciated.

Cheers,

Daniel

Dan,

Is there any technical limitation of p123, other than the rank cache, that prevents one from concurrently running more than one /rank/performance call via the API? This is a feature I’d be willing to pay extra for, even if it meant forgoing the cache for those runs.

Unlike Daniel’s use case, my optimization is bottlenecked on the /rank/performance calls to p123, so I’m effectively single-threaded. Without the ability to issue multiple p123 calls simultaneously, my other option would be to do a bulk download of factor data via /data/universe and build my own mini rank performance engine locally. But I’m not sure I want to go down that path just yet…
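
For reference, the local engine I have in mind would be something along these lines: rank a factor cross-sectionally each period, bucket the ranks, and average forward returns per bucket. The column names are assumptions about what the bulk download would contain.

import pandas as pd

def bucket_performance(df, factor='factor', buckets=10):
    # df: one row per (date, ticker) with a factor value and the forward
    # return to the next rebalance date.
    df = df.dropna(subset=[factor, 'fwd_return']).copy()
    # Percentile rank within each date (higher value = higher rank).
    df['rank_pct'] = df.groupby('date')[factor].rank(pct=True)
    # Map percentile ranks in (0, 1] onto buckets 1..N.
    df['bucket'] = (df['rank_pct'] * buckets).clip(upper=buckets - 1e-9).astype(int) + 1
    # Average forward return per bucket per date, then across dates.
    per_date = df.groupby(['date', 'bucket'])['fwd_return'].mean()
    return per_date.groupby(level='bucket').mean()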

Thanks,
Feldy

No, it’s strictly to prevent overtaxing our servers.

I think it’s time to add a mechanism for users to buy additional parallelism, 2x or 3x for example. We have things that might take a while to backtest, so having an additional “lane” or more would be very helpful. For example, a trained AI model might take several minutes (maybe 10) to backtest.

It should not be too hard to add an “add-on” for parallelism.

Thanks

Marco, is this “additional parallelism” idea on the roadmap?
I’d like to vote for it.

Thanks
Tony

I added the request to the Roadmap. Around 3+ votes would certainly bump it up. Thanks