I’ve noticed more longtime P123 users adopting the AI/ML module. I haven’t seen a post from Marc Gerstein using the AI module (yet), but it’s clear that AI/ML is gaining real traction. There’s also a growing group of newer, quantitatively savvy members exploring AI/ML for investing.
@Marco has mentioned to me that adding AI has improved the business in ways he didn’t fully elaborate on. I’d be interested if he wanted to share more. My impression—though I could be wrong—is that enterprise clients may be noticing the platform’s more advanced modeling capabilities.
What I am sure of: I plan to keep using P123 long into the future. I believe AI and LLMs will be essential not only to individual success, but to P123’s long-term growth, and maybe its long-term viability.
If I thought P123 could simply continue providing downloads (which I am most interested in seeing continue) and maintain the current AI features without any further growth or expansion, I wouldn’t feel the need to post this. But I don’t believe that’s the case, at least for the retail offering. And I do get the impression that the visibility of advanced AI on the retail side has helped the enterprise business.
It certainly wouldn’t hurt anyone if P123 could build on this momentum. So here’s my question to the group:
What marketing ideas do people have to help showcase P123’s growing strength in AI and advanced quant?
Here are two random ideas to start:
1. Forum-based visibility.
One strength of P123 is that anyone can read staff posts in the forums—even non-members. @YuvalTaylor has done a great job showcasing what’s possible with P123 Classic. This likely benefits both the community and the business. I hope this continues.
It might help to have a P123 team member doing something similar for the AI/ML module—sharing examples, test results, and practical insights, and helping new members get started. Clearly this would be in addition to continued posts about the success of P123 Classic, not a replacement for them.
2. A model-building competition.
P123 already provides some prebuilt AI models, but none of them are perfect (in my opinion). What if we had a kind of internal, Kaggle-like competition—independent of feature choice—where members submit models (including hyperparameters), and we test which ones generalize best out-of-sample?
Maybe folks like @Algoman, @Pitmaster, Andreas, Yuval, Dan, Daniel, Feldy, Trendiest, ScifoSpace, Charles123, Chawasri, @azouz110, @InspectorSector, and others would participate. It could be a fun way to learn:
– Compare tree models vs linear models
– See how different users tune their models (including hyperparameters)
– Discuss performance vs interpretability
It wouldn’t require exposing any proprietary features—just duplicate successful features already in wide use and rename them (e.g., Masked_Feature_1, Masked_Feature_2).
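Just to make that concrete, here’s a rough sketch, in Python with scikit-learn on purely synthetic data, of how a scoring harness might compare two hypothetical submissions out-of-sample. The masked column names, the fake target, and the two example models (a gradient-boosted tree vs. a ridge regression) are placeholders I made up, not a statement about how P123 would actually run it.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)

# Stand-in for a shared dataset of widely used factors, renamed so nothing
# proprietary is revealed (Masked_Feature_1, Masked_Feature_2, ...).
n_rows, n_features = 5000, 8
X = pd.DataFrame(
    rng.normal(size=(n_rows, n_features)),
    columns=[f"Masked_Feature_{i + 1}" for i in range(n_features)],
)

# Synthetic "future return" target with a mix of linear and nonlinear
# structure plus noise -- purely illustrative.
y = (
    0.5 * X["Masked_Feature_1"]
    - 0.3 * X["Masked_Feature_2"]
    + 0.4 * np.tanh(X["Masked_Feature_3"] * X["Masked_Feature_4"])
    + rng.normal(scale=1.0, size=n_rows)
)

# Time-ordered split: earlier rows train, later rows are held out, so the
# comparison is genuinely out-of-sample rather than a random shuffle.
split = int(n_rows * 0.7)
X_train, X_test = X.iloc[:split], X.iloc[split:]
y_train, y_test = y.iloc[:split], y.iloc[split:]

# Two example "submissions": a tree ensemble and a linear model, each with
# whatever hyperparameters the submitter chose.
submissions = {
    "tree (gradient boosting)": GradientBoostingRegressor(
        n_estimators=300, max_depth=3, learning_rate=0.05, random_state=0
    ),
    "linear (ridge)": Ridge(alpha=10.0),
}

for name, model in submissions.items():
    model.fit(X_train, y_train)
    score = r2_score(y_test, model.predict(X_test))
    print(f"{name}: out-of-sample R^2 = {score:.3f}")
```

A real version would obviously use a shared, point-in-time dataset and whatever scoring metric the group agrees on; the point is just that the mechanics are simple.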
I get the impression that @azouz110 regularly enters competitions like this and could probably help set one up. I suspect there are P123 members who compete on Kaggle but do not post often; maybe they would have ideas for this too. To be clear, I mean an internal competition within P123 that has nothing to do with Kaggle other than being a competition.
Maybe this last idea is not practical—it’s just something I’m throwing out there without having researched the logistics. And maybe these are the two worst ideas anyone will suggest in this thread.
But I’d love to see even better ones—ideas P123 could actually use to highlight its growing strength in AI and quant modeling.
Curious to hear what others think.