AI Factor - Designer Model?

Any thoughts yet on how we will be able to sell an AI-Factor based Designer model? (i.e., how will you deal with the traditional 10 years of backtest for a ranking system? With AI/machine learning it is very different...)

Predictors will give errors if asked to predict using training data.

You will be able to backtest with a new "point in time" predictor we are working on. It will automatically choose the right predictor for the point in time date.
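Conceptually it is just a lookup: for each as-of date, use the newest predictor whose training data ends before that date, so the backtest never predicts on data the model was trained on. A rough sketch (names and structure are hypothetical, not the actual implementation):

```python
# Hypothetical sketch of "point in time" predictor selection, not P123 code.
# predictors_by_train_end maps each predictor's training-end date to the
# trained predictor object.
import bisect

def pick_predictor(as_of_date, predictors_by_train_end):
    """Return the newest predictor whose training data ends before as_of_date."""
    train_ends = sorted(predictors_by_train_end)
    i = bisect.bisect_left(train_ends, as_of_date)
    if i == 0:
        raise ValueError("no predictor trained strictly before this date")
    return predictors_by_train_end[train_ends[i - 1]]
```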

Hi Marco,
Any updates on this new "point in time" predictor? I can still see this error:

Or let me know if something is missing from my predictor/simulation.
Thanks

1 Like

I thought we disabled the requirement that a designer model have 10Y of backtest data. I changed it to a three-year minimum. Will that work?

It's working now. Thank you

Waiting for AI-Factor based Designer models ... in the meantime ...

Maybe some of you can share the performance of your AI-Factor models during the recent mini crash? Which types of models worked best? What additional features did you use (sectors, macro)? How do the AI results compare to your ranking-based models?

I'm still working on my local ML model. But somehow my ranking-based models (with quite complex feature definitions and transformations) deliver such solid performance (in- and out-of-sample) that I'm not yet convinced to switch to a black box. Too little performance improvement versus the sacrifice in interpretability. But maybe I should work harder ...

2 Likes

We're brainstorming bringing cross-validation and weight optimization to the classic ranking system.

The current way to develop a ranking system is brute force. You set the weights, run some tests, and keep iterating. It's basically a 100% curve-fitting process. The new interface we released helps a bit by giving you a split view of the 1st half vs. the 2nd half. But still, you are using in-sample data to make factor and weight decisions.

Leveraging the existing ML framework by using some linear ML algos to optimize the weights, and cross-validating them with blocked K-fold CV, shouldn't be too hard. It would also be a lot simpler to use than AI Factor, with very few settings to play with.
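To make the idea concrete, here is a minimal scikit-learn sketch, assuming a DataFrame of per-stock factor ranks and forward excess returns sorted by date (the column names and the Ridge model are illustrative choices, not actual P123 settings):

```python
# Minimal sketch: linear model + blocked K-fold CV to derive ranking weights.
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def blocked_kfold_weights(df, factor_cols, target_col, n_splits=5):
    """Fit a linear model per fold, report out-of-sample fit, average the weights."""
    X = df[factor_cols].to_numpy()
    y = df[target_col].to_numpy()
    # shuffle=False keeps each fold as a contiguous time block (blocked K-fold)
    kf = KFold(n_splits=n_splits, shuffle=False)
    fold_weights = []
    for train_idx, test_idx in kf.split(X):
        model = Ridge(alpha=1.0).fit(X[train_idx], y[train_idx])
        fold_weights.append(model.coef_)
        print("out-of-sample R^2:", model.score(X[test_idx], y[test_idx]))
    # average the fold weights, clip negatives, normalize to 100% for ranking use
    w = np.clip(np.mean(fold_weights, axis=0), 0, None)
    return pd.Series(w / w.sum() * 100, index=factor_cols)
```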

4 Likes

Just for my understanding, isn't the ML as of now already just optimizing the weights?

Right, that's what training does: it optimizes the model's feature importances (weights) to reduce prediction errors during the in-sample periods. But, crucially, the model is evaluated out of sample in the "folds".

1 Like

So I am not getting what you are suggesting to add.

Thrilled to hear cross-validated weight optimisation is coming to classic rankings – this feature alone has me ready to upgrade to the Ultimate plan with AI! :rocket:

Two small add-ons would make it even more compelling (apologies if they’re already in place; I don’t have AI access yet):

  1. Non-negative & “anchor” weight controls
    A simple per-factor toggle to:
    a. Non-neg – keep a weight ≥ 0 so a factor can only help, never hurt
    b. Anchor – freeze a proven factor at its current weight
  2. Weight-stability diagnostic
    A simple chart that shows how each weight moves across CV folds. If a line flips sign or jumps, we know the model is fragile before deploying it.
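Both ideas are cheap to prototype. A rough sketch, assuming arrays of factor ranks X and forward excess returns y (the function names and the residual-fit trick for anchoring are my own assumptions, not an existing P123 capability):

```python
# Rough sketch only; not a P123 feature.
import numpy as np
from scipy.optimize import nnls
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def nonneg_weights(X, y, anchor_idx=None, anchor_weight=0.0):
    """Non-negative weights; optionally freeze one factor at a fixed weight."""
    if anchor_idx is not None:
        # subtract the anchored factor's fixed contribution, then fit the rest
        resid = y - anchor_weight * X[:, anchor_idx]
        free = [j for j in range(X.shape[1]) if j != anchor_idx]
        w_free, _ = nnls(X[:, free], resid)
        w = np.zeros(X.shape[1])
        w[free], w[anchor_idx] = w_free, anchor_weight
        return w
    return nnls(X, y)[0]

def weight_stability(X, y, n_splits=5):
    """Unconstrained per-fold weights: sign flips or large jumps flag fragility."""
    folds = KFold(n_splits=n_splits, shuffle=False).split(X)
    return np.vstack([LinearRegression().fit(X[tr], y[tr]).coef_ for tr, _ in folds])
```

Plotting each column of the matrix returned by `weight_stability` against fold number is exactly the diagnostic chart described in point 2.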

If these features are already available, my apologies – I’m not yet on Ultimate. Just let me know, and I’ll be happy to beta-test and jump on the AI bandwagon. :steam_locomotive:

1 Like

That works. That is how this was done (to the letter) with downloads. I funded it for a while (starting 10.1.24, as shown in the ranking system date) and stopped only because it was highly correlated with one of my other ports. I did not need both (this and the correlated port, which continues to do well out of sample):

This is simple enough that it could be provided to members at lower membership levels if you wanted to increase overall membership. It would probably use fewer net computer resources than the constant manual optimization done now, so you would likely save money on compute. Linear models run fast, and there is no need for more complex prediction methods (just use P123 Classic's ranking system at rebalance).

Even if you offered just fast, simple multivariate regression at some of the membership levels, you could attract members. Maybe charge for ridge, LASSO, or elastic net computer time, but most members would not need much compute, and almost certainly less than manual optimization uses now.

In other words, with no hyperparameter optimization, this method would be a single run versus the multiple, never-ending runs of manual optimization as it is done now.

A single run and the weight optimization is done, cross-validated while you are at it. Members will still try different features, but that is already part of manual optimization today, along with the need to adjust weights by hand.

1 Like

I like this idea. Because suppose I am a value investor and I want to design a model 'my way' with 50% value weighting. Then the other 50% weighting can be a mix of other factors based on custom ML weighting. A hybrid between my own ranking system and ML. Or at least I lock down my own style and strategy while giving some additional weight to ML factors.

Furthermore, the AI Factor could use a trailing 5-year (or whatever is specified) window as the training period for the next set of predictions. So my value hybrid strategy would be value-momentum one year and value-sentiment or GARP the next, depending on market conditions.
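Just to sketch what that trailing-window retrain might look like (a rough illustration under my own assumptions about the data layout, not how AI Factor is implemented): refit on the last N years at each step, and the weights drift with the regime.

```python
# Rolling (trailing-window) retrain sketch; assumes a DataFrame indexed and
# sorted by date, with factor-rank columns and a forward-return target column.
import pandas as pd
from sklearn.linear_model import Ridge

def rolling_weights(df, factor_cols, target_col, window_years=5, step="YS"):
    """Refit on a trailing window at each step; weights adapt over time."""
    weights_by_date = {}
    start = df.index.min() + pd.DateOffset(years=window_years)
    for refit_date in pd.date_range(start, df.index.max(), freq=step):
        window = df.loc[refit_date - pd.DateOffset(years=window_years):refit_date]
        model = Ridge(alpha=1.0).fit(window[factor_cols], window[target_col])
        weights_by_date[refit_date] = model.coef_
    return pd.DataFrame(weights_by_date, index=factor_cols).T
```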

By the way, linear models are just one way to optimize P123 Classic ranking system weights. There are clear advantages to non-linear methods for finding weights, particularly those that allow for interactions between features (which linear models can't capture), or that directly optimize for simulation returns rather than something like mean squared error. These can be meaningful improvements.
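Just to illustrate the interaction point with a toy example (this is not one of the approaches I mention keeping back below, only a generic demonstration): a target driven purely by an interaction is invisible to a linear fit but easy for a tree ensemble.

```python
# Toy illustration: a pure interaction effect that a linear model cannot capture
# but a tree ensemble can. Synthetic data; not a recommendation of any model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((5000, 2))
y = (X[:, 0] - 0.5) * (X[:, 1] - 0.5) + rng.normal(0, 0.02, 5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)
print("linear out-of-sample R^2:", LinearRegression().fit(X_tr, y_tr).score(X_te, y_te))
print("boosted trees R^2:       ", GradientBoostingRegressor().fit(X_tr, y_tr).score(X_te, y_te))
```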

Here’s a method Yuval found interesting in a previous discussion (with code even), though I ultimately can’t recommend it—it just didn’t work well in my hands:

:link: Genetic Algorithm to Replace Manual Optimization in P123 Classic

In the thread @korr123 said he had used a genetic algorithm to automate and optimize P123 classic ranking system weights already.
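For anyone curious what that looks like mechanically, a bare-bones genetic algorithm over weight vectors is only a few lines. This is a generic sketch assuming a user-supplied backtest(weights) function that returns the metric to maximize (e.g., simulated annualized return); it is not korr123's actual code:

```python
# Generic GA sketch: evolve normalized weight vectors toward a better backtest score.
import numpy as np

def ga_optimize(backtest, n_factors, pop_size=50, generations=100, seed=0):
    rng = np.random.default_rng(seed)
    # population of weight vectors, each normalized to sum to 1
    pop = rng.random((pop_size, n_factors))
    pop /= pop.sum(axis=1, keepdims=True)
    for _ in range(generations):
        fitness = np.array([backtest(w) for w in pop])
        # selection: keep the better half as parents
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]
        # crossover: average two random parents; mutation: small noise, clipped at 0
        pairs = rng.integers(0, len(parents), (pop_size, 2))
        children = (parents[pairs[:, 0]] + parents[pairs[:, 1]]) / 2
        children += rng.normal(0, 0.02, children.shape)
        children = np.clip(children, 0, None)
        pop = children / children.sum(axis=1, keepdims=True)
    fitness = np.array([backtest(w) for w in pop])
    return pop[np.argmax(fitness)]
```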

That said, there are other paths worth exploring. I could share one or two of my approaches, under contract or for a price, as I just can't give away my hard work for free.

Also, and perhaps this is most important, anything posted in the forum is likely to be used outside of P123, by retail traders who are not members or even by P123 competitors, before it becomes a P123 feature (if it ever does).

But I also suspect you’d come up with equally effective, creative solutions independent of my methods, if you keep digging into this.

Also, I'm sure Azouz, Pitmaster, Korr, and others have some ideas on this topic. It's a pretty obvious thing to do, and I am sure I am not the first P123 member to explore it, as you (and Korr) ultimately did. What you are suggesting will work, I believe. I do not mean to diminish its effectiveness by raising the possibility of other (additional) methods.

No, the features you suggest are not part of AI Factor, since there are a multitude of models and not all of them support those features. Adding more model-dependent features requires careful UX consideration, and AI Factor is a version 1 release. Also, other more important things are still missing, like feature-engineering tools.

FYI, we have enabled AI Factor for some of the legacy memberships and for the Portfolio membership. First-time users also get a $50 credit for training. We'll formally announce it soon. Take a look and let me know what you think.

The ranking system CV optimizer would be much simpler than AI Factor. Not sure yet how much simpler. Probably something like this:

  • Few target options
  • None of the complicated, per feature normalization
  • No predictors
  • No hyper-parameter tuning

Thanks for the suggestions. We'll be discussing this for sure in our upcoming retreat.

1 Like

For discussion at your meeting:

I’ve tried nearly every ML method imaginable—and honestly, this is probably the one I’d use today. Here’s why something this simple might be more powerful than it first appears:

:white_check_mark: I just need excess returns.

:white_check_mark: I just need ranks.

:white_check_mark: I just need to rebalance using P123 Classic .

:white_check_mark: One grid search is enough—I usually settle on stable parameters early with no further retesting.

—

Proposal:

  • Allow users access to Elastic Net (with user-defined alpha and l1_ratio; no grid search required).
  • One run → one set of coefficients → usable as ranking weights .
  • Bonus: Elastic Net naturally performs feature selection.

—

This isn’t just simple—it’s effective. Elastic Net–derived weights can perform very well in P123 Classic. It’s easy to test and verify.
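A minimal sketch of the whole workflow, assuming a matrix X of factor ranks and a vector y of forward excess returns (the clip-negatives-then-rescale step is just one simple convention for turning coefficients into ranking weights, not a P123 rule):

```python
# One run -> one set of coefficients -> ranking weights. Illustrative only.
import numpy as np
from sklearn.linear_model import ElasticNet

def elastic_net_ranking_weights(X, y, factor_names, alpha=0.001, l1_ratio=0.5):
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio).fit(X, y)
    coef = np.clip(model.coef_, 0, None)   # drop factors that came out negative
    weights = 100 * coef / coef.sum()      # rescale to ranking-system percentages
    # the L1 component zeroes out weak factors: built-in feature selection
    return dict(zip(factor_names, np.round(weights, 2)))
```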

—

Forum engagement & marketing value

@hemmerling noted that some AI discussions can get overly technical (he’s right). But this idea is intuitive and powerful.

Linear regression is technically machine learning—but most users learned it in high school, complete with a line and a graph. That’s approachable, memorable, and teachable.

This could drive:

  • More practical, high-quality discussions in the forum (free marketing)
  • New member growth through a feature that feels advanced but is easy to use

—

Who it appeals to:

  • Users who dislike “AI” but are comfortable with linear regression
  • Advanced users who know Elastic Net works (and why)
  • Anyone who wants something fast, understandable, and that just works

Enjoy the retreat, but don't drink too much Italian wine!

1 Like

I didn't follow the entire discussion, but be careful about making it so simple that everyone ends up using the exact same model. That's not a formula for success.

1 Like

I wish it could become so simple that many more people willing to put in a little effort could beat the greedy hedge funds and manipulators. All their algorithms do is steal.

1 Like

I believe we have our first AI DM model:
https://www.portfolio123.com/app/r2g/summary?id=1822627

Good luck, Andreas!

It will be interesting to compare how AI DM models perform against old-school ranking systems.

2 Likes

This looks interesting. And it would be great if we could see the weights so that we can tweak them or mix various systems.

I am also wondering about the current ML system: in Portfolio Results we see the fold results per annum/period, as well as a turnover. How does that work, and what does it mean? Let's say we have a 3-month target. It does not seem to roll over every 3 months. Asked differently, how would this be modeled in a SIM?