Or does Port123 staff envision a ranking system that utilizes several combined predictors?
Limiting the feature weight would really be the domain of the underlying machine learning algorithm, unless you were to do some manual postprocessing of the model parameters after it's been fit.
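As a rough illustration of that kind of postprocessing (not anything p123 exposes, just a generic scikit-learn sketch): after fitting a linear model, you could clip the fitted coefficients to an arbitrary cap so no single feature dominates. The cap value and toy data here are purely my assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data standing in for factor exposures (X) and forward returns (y).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X @ np.array([0.5, 3.0, -0.2, 0.1, 0.0]) + rng.normal(scale=0.5, size=500)

model = LinearRegression().fit(X, y)

# Manual postprocessing: cap each coefficient's magnitude at an arbitrary
# limit so no single feature dominates the fitted model.
cap = 1.0
model.coef_ = np.clip(model.coef_, -cap, cap)
```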
If you're doing a linear regression, p123 exposes both lasso and ridge regression. Lasso uses L1 regularization, which penalizes the model on the sum of the absolute values of the coefficients, so it can steer the model toward zero coefficients and a sparser model. Ridge uses L2 regularization, which penalizes the model on the sum of the squared coefficients, so it discourages an overly large coefficient on any one feature.
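Here's a minimal sketch of the difference using scikit-learn's Lasso and Ridge (generic sklearn, not p123's internal implementation; the alpha penalties and toy data are arbitrary assumptions):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
# Only the first two features actually matter; the rest are noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=500)

lasso = Lasso(alpha=0.1).fit(X, y)   # L1: drives irrelevant coefficients to exactly zero
ridge = Ridge(alpha=10.0).fit(X, y)  # L2: shrinks all coefficients, none exactly zero

print("lasso:", np.round(lasso.coef_, 3))  # sparse: zeros on the noise features
print("ridge:", np.round(ridge.coef_, 3))  # small but nonzero everywhere
```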
If you're worried about factor regimes changing, I think having the model periodically refit would be a good strategy, and that's why I'm hoping we can eventually get that support in p123.
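For what it's worth, here's a minimal walk-forward sketch of what periodic refitting looks like, since p123 doesn't support it yet. The window length, refit frequency, and data are all arbitrary assumptions on my part:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_periods, n_features = 120, 5   # e.g. 120 months of factor data
X = rng.normal(size=(n_periods, n_features))
y = rng.normal(size=n_periods)

window, refit_every = 36, 12     # 3-year training window, refit yearly
predictions = {}
for t in range(window, n_periods, refit_every):
    # Refit on the most recent window so the model adapts to the current regime.
    model = Ridge(alpha=1.0).fit(X[t - window:t], y[t - window:t])
    # Use this fit out of sample until the next scheduled refit.
    horizon = slice(t, min(t + refit_every, n_periods))
    predictions[t] = model.predict(X[horizon])
```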
I'm assuming you're referring to combining several AI predictors, and I absolutely plan on testing this. It's referred to as stacking in the machine learning world and is a common technique for improving performance.
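For anyone curious what stacking looks like in code, here's a minimal sketch with scikit-learn's StackingRegressor. The base models and toy data are arbitrary choices for illustration, not a recommendation and not how p123 would necessarily implement it:

```python
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.5, size=500)

# Each base predictor makes out-of-fold predictions, and a final
# meta-model learns how to combine them.
stack = StackingRegressor(
    estimators=[
        ("ridge", Ridge(alpha=1.0)),
        ("forest", RandomForestRegressor(n_estimators=100, random_state=0)),
    ],
    final_estimator=Ridge(alpha=1.0),
    cv=5,
)
stack.fit(X, y)
```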