All,
Azouz’s post is important for anyone really interested in this. The returns are in NMR for, I think, placing a stake or betting on their strategy. It has absolutely nothing to do with whether these are good, stand-alone strategies. Nada, zilch. Maybe they are, maybe they are not. Maybe some of both. No way to know. No reason to care.
Without a doubt, the only motivation of the programmers is to get NMR, and they make no other claims. That is not entirely different from mining Bitcoin having nothing to do with actual mining (and no one claiming it does).
The details are not too important other than to understand that it is a type of crowdsourcing. In some ways, perhaps, like the prediction markets some of us are familiar with for presidential campaigns. But ultimately, the details are uninteresting other than that they are getting a bunch of people to contribute, and the contributors are motivated to do it right. They are not filling out a survey that they are not getting paid for. The best way to combine the data once you get it is interesting, IMHO.
Some things I found kind of interesting after looking at Azouz’s link above:
- Their data is encrypted, so one does not know what the factors are. But the video in the link Azouz provides says “news” can be one of the factors.
- It is common to look at the predictions for an industry or sector and weight those predictions for each industry. One can find a clear example of this over at Fidelity, where this is done with the “Equity Summary Score.” I am pretty sure some P123 members do this, so I am not claiming to have some wonderful new insight. But the link seems to suggest that Numerai also does this.
- Azouz mentions “Stacking,” which is universal in machine learning. Arguably we do this when we combine nodes in a ranking system.
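For anyone who has not run into the term, here is a minimal sketch of what stacking strategy predictions could look like. Everything in it is hypothetical (made-up returns, three toy "Designer" strategies, a simple least-squares meta-model); it is only meant to show the idea of fitting weights on held-out data and combining base predictions, not how Numerai or anyone else actually does it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical holdout data: realized next-period returns for 200 stocks,
# plus return predictions from three toy base strategies ("Designers").
true_returns = rng.normal(0.0, 0.05, size=200)
preds = np.column_stack([
    true_returns + rng.normal(0, 0.02, size=200),  # a decent strategy
    true_returns + rng.normal(0, 0.10, size=200),  # a noisy strategy
    rng.normal(0, 0.05, size=200),                 # a worthless strategy
])

# Stacking step: fit weights on the holdout sample by least squares,
# then combine the base predictions using those weights.
weights, *_ = np.linalg.lstsq(preds, true_returns, rcond=None)
stacked = preds @ weights
```

The point of the sketch is that the meta-model, not the Designer, decides how much each strategy counts: the worthless strategy ends up with a weight near zero without anyone having to judge it by hand.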
P123 could probably breathe new life into the Designer Models if they employed Azouz or otherwise developed a method for “Stacking” successful Designer Models, with Designers opting in or out based on an incentive. Maybe even looking at what Numerai has done as an incentive model.
This would probably instill a sense of cooperation among the members: wanting others to do well, because other members’ strategies could ultimately help them if they invest in the stacked strategy. Members would also be motivated to recruit new members they thought had talent. A successful model would be a strong incentive for joining P123.
Of course, we already have an AI specialist if Azouz cannot be recruited for the project. But if P123 is serious about machine learning, it should not be a question of whether to do this but rather where to put it on the priority list. A simple version would be…well, simple. Weighting the industries would be a little harder. But it would take more than a little space to list all of the programmers I have encountered on this forum whom I think are quite capable of this.
Fidelity has been doing something like this since 2009, with positive out-of-sample results to report. Those results rival the median result of almost any Designer (with or without survivorship bias), and perhaps have no rival at P123 over the same period.
To be clear, there are some very bad models at Fidelity (most are bad on their own, in fact), but they get weighted using a type of ML, or stacking if you will (i.e., the Equity Summary Score). But even some of the bad models seem to work in some industries. And when they are all stacked together, it just works out-of-sample.
More simply: Overfitted sim results (or bad port results for any reason) get taken out of the (stacking) equation. And P123 has far more strategies to stack than Fidelity does.
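A crude sketch of that filtering idea, with entirely made-up numbers (five hypothetical strategies, 60 months of out-of-sample returns, a simple Sharpe-like cutoff standing in for whatever a real stacking method would use):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical out-of-sample monthly returns for five strategies:
# two with a real edge, three that were overfit in-sample and have none.
oos = np.column_stack([
    rng.normal(mu, 0.04, size=60)
    for mu in (0.010, 0.008, 0.000, -0.002, 0.001)
])

# Drop strategies whose out-of-sample Sharpe-like score is non-positive,
# then equal-weight the survivors -- overfit sims fall out of the stack.
scores = oos.mean(axis=0) / oos.std(axis=0)
keep = scores > 0
stacked_returns = oos[:, keep].mean(axis=1)
```

Nothing about the cutoff or the equal weighting is special; the point is only that strategies with no out-of-sample edge contribute nothing to the combined result, which is exactly what an overfitted sim deserves.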
Best,
Jim