It could be good marketing to test the AI/ML release in a rigorous way and share the results

@marco

TL;DR: I believe P123 has a unique opportunity to showcase the superiority or simplicity (or both) of its AI/ML tools in a way that’s both rigorous and transparent, significantly benefiting your marketing efforts.

I have received a number of emails from members who are actively exploring machine learning techniques and comparing them against P123's classic methodologies, usually without direct involvement from P123 regarding the ML methods. This grassroots exploration highlights the community's eagerness to integrate AI/ML into their strategies, albeit with varying degrees of success and accuracy. By stepping in to guide these discussions, P123 can leverage its upcoming AI/ML release to its full potential, ensuring optimal implementation and alignment with available features, rather than being limited by my own abilities and my understanding of what the AI/ML release will include.

As it is, many of the ML models I see are purely random when I first see them. Often there are easy-to-make (and easy-to-correct) errors that render a model completely useless (e.g., a wrong target). Members' results might be even better with active engagement from P123's staff and members.

The enthusiasm is palpable in ongoing discussions, as seen in another thread that delves into a myriad of papers, techniques, and factors for stock picking. These discussions are relevant to the upcoming AI/ML release on a hit-or-miss basis and could be further enriched by P123's insights. Check it out here.

For marketing leverage, imagine utilizing all P123 factors in the core ranking system. As an individual member, a comprehensive analysis of these factors is beyond my reach due to cost constraints. However, P123 is in a prime position to conduct thorough k-fold cross-validation on a training set and then evaluate an out-of-sample test set on the core factors. This would not only validate the effectiveness of these factors but also refine the AI/ML features prior to launch, using the well-established Core Ranking System as a foundation.
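For concreteness, here is a minimal sketch of what I have in mind, assuming the factor exposures arrive as a plain feature matrix `X` with forward returns `y` (the names, shapes, and random placeholder data are mine, not P123's):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical data: rows are stock-date observations, columns are
# Core Ranking System factors; y is the forward-period return (the target).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = rng.normal(size=5000)

# Hold out the most recent observations as the untouched test set --
# no peeking into this period during any model selection.
split = int(len(X) * 0.8)
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# k-fold cross-validation on the training set only (with time-ordered
# data, sklearn's TimeSeriesSplit may be the better choice).
kf = KFold(n_splits=5, shuffle=False)
scores = []
for train_idx, val_idx in kf.split(X_train):
    model = GradientBoostingRegressor()
    model.fit(X_train[train_idx], y_train[train_idx])
    scores.append(model.score(X_train[val_idx], y_train[val_idx]))
print("mean CV R^2:", np.mean(scores))

# Only after settling on a model: one pass over the held-out test set.
final = GradientBoostingRegressor().fit(X_train, y_train)
print("out-of-sample R^2:", final.score(X_test, y_test))
```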

This pre-launch analysis, rooted in fairness and rigor (absolutely no peeking into future test periods!), could significantly bolster P123’s toolkit and marketing edge.

And it would finally put an end to the question: is AI/ML better, easier, or both? You could take control of that narrative.

Considering the computational resources at P123's disposal, such a project could offer rapid insights and highlight P123's unique strengths. The features actually available at P123 today could be evaluated in a rigorous and transparent manner, adding to a published literature that may or may not cover anything we can actually use at P123.

Moreover, inviting member participation in a competition would foster innovation and collective problem-solving. Providing access to masked features for a designated training period would allow members to test various strategies using P123’s advanced tools. Drawing inspiration from Kaggle, we could customize a contest that plays to P123’s strengths, rewarding engagement with API credits.

To maintain fairness, incorporating randomness into the feature set, or using techniques like block bootstrapping, would ensure a balanced competition. Block bootstrapping in particular keeps the integrity of market-regime timing intact while effectively concealing the specific features and periods used.
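As a rough sketch of the block idea (a toy illustration of my own, not a proposal for P123's exact implementation): resample contiguous blocks of dates, so that within-block regime structure survives while the sampled history no longer maps one-to-one onto the real calendar.

```python
import numpy as np

def block_bootstrap_indices(n_periods, block_len, rng):
    """Moving-block bootstrap: stitch together randomly chosen
    contiguous blocks of dates until n_periods are covered."""
    idx = []
    while len(idx) < n_periods:
        start = rng.integers(0, n_periods - block_len + 1)
        idx.extend(range(start, start + block_len))
    return np.array(idx[:n_periods])

rng = np.random.default_rng(42)
resampled = block_bootstrap_indices(252, block_len=21, rng=rng)
# Each 21-day block keeps its internal (regime) ordering intact, but the
# overall sequence no longer matches the true calendar, so contestants
# cannot simply line the data up against known market history.
print(resampled[:25])
```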

This initiative would showcase AI/ML's potential, refine our approach, and unite our community.

Not to mention saving time on reading other people’s studies about things not available at P123.

If you want, you can give me access to a masked version of the Core Ranking System's features so that I do not know which features I am eliminating. You can review what I do over the weekend (multivariate regression, a scikit-learn boosting model, and a random forest) and compare it to Core Classic's out-of-sample results. I think you would want to share the results if you are interested in showcasing the soon-to-be-augmented abilities at P123.
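Concretely, the weekend exercise would look something like this (a sketch; `masked_X` and `fwd_returns` are placeholders for whatever anonymized export P123 would hand me):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

# Placeholder masked data -- in the real exercise these would come
# from the anonymized Core Ranking System feature export.
rng = np.random.default_rng(1)
masked_X = rng.normal(size=(4000, 15))
fwd_returns = rng.normal(size=4000)

split = int(len(masked_X) * 0.8)  # chronological train/test split
models = {
    "multivariate regression": LinearRegression(),
    "boosting": GradientBoostingRegressor(),
    "random forest": RandomForestRegressor(n_estimators=200),
}
for name, model in models.items():
    model.fit(masked_X[:split], fwd_returns[:split])
    r2 = model.score(masked_X[split:], fwd_returns[split:])
    print(f"{name}: out-of-sample R^2 = {r2:.4f}")
```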

Better still, you could do it internally, including support vector machines and neural nets. Get Riccardo and Yuval involved for sure, but also other staff, including the AI/ML expert, and maybe some outside resources to make it complete and fair.

People are doing this now without the full resources, knowledge, experience, and expertise available to P123, and they are getting answers that may not be the best answers for anyone, at times without working together to find them.

Eager to hear your thoughts!

Best regards,

Jim