AI Factor launches tonight. Beta program ends at midnight

Dear All,

We're launching AI Factor tonight and ending the beta. Any tasks that complete after midnight tonight (Sep 4, 12:00 AM) will be billed according to the worker used and the execution time. To run predictions, the Prediction add-on will be required. You will be able to activate it from the AI Factor dashboard after we release the non-beta site.

We have many upgrades coming, so let us know what you would like to see added. For now we'll loosely use the votes in the AI PRE LAUNCH: Feedback Roundup post as a guide.

Thank you for participating in the beta!

P.S. Do let us know any reasons why you would not continue using it, too!

  1. That's the Prediction function, right? The Prediction page remains no charge?

  2. What's the billing policy for cancelled/aborted/stuck training and validation jobs?

  3. I've been running training/validation recently. Where can I see what my charge would have been? Is the accumulated bill listed somewhere?

  4. AI billing credits roll over, right?

  5. I'm using Predictors in a live model. What's going to happen if I don't delete that model and don't sign up for AI?

I'll wait before using the AI tools. I would like to see reports on out-of-sample performance.

  1. Yes.
  2. Cancelled: you pay. Everything else: no charge.
  3. From Aug 1 till now it would have been $60 for all your AI workload. There will be a way to download the log.
  4. AI usage is charged daily with a minimum of $50, or taken from the credit balance, which never expires.
  5. The rebalance will give an error.

I would suggest that notice be placed on Account Settings -> AI Factor page.

The pricing does not seem very straightforward.

  • "Prediction page - Free." Who is it free for? Does it require the $100 add-on, or can I still access it if I stop paying the $100 once an AI predictor is created?
  • "1 Billed nightly, $50 minimum." Do we get billed when we accumulate $50 of usage, or do we get charged a minimum of $50 every day we use it?
  • To continue training and validating, do we need the $100 add-on?

I think anyone new to the platform would have an even harder time understanding what they will eventually pay to train and use an AI model. A few use case examples would probably help clarify what we/they might end up paying.

The short answer is that the ML tools I actually use are not available now. The below is more of a marketing concern.

TL;DR: Most members posting in the forum about ML now have an EXTREME level of sophistication. Not sure an average new member could get over the learning curve with the present price structure.

You might consider separating out something like simple linear regression, which is fast and has minimal memory requirements. Something that is not expensive for P123 to run, and basically make it free. I assume a linear regression requires just the memory of the coefficients for predictions.

Having said that, I'm not sure why it takes so long for a ridge regression to run on your servers.

People using ML now have had the opportunity to try the beta. New members will have to be highly committed and be quick learners for it to work for them. Mostly it won't work for newbies.

You might let newbies dip their toes into the water of machine learning. Allow for a bit of a learning curve.

Remember it took some of us 10 years to get comfortable with ML and some long-time members still do not believe in ML even after the beta (I am guessing). They could not get it to work for them in other words.

I can remember discussing whether to use a t-test or a paired t-test in the forum when I first started. I was the first to post about bootstrapping in the forum and nobody got it at the time (including me really). It took a while to get ML to work for me, and I have more than the average amount of interest.

I can remember when whether to use cross-validation or not was a HUGE controversy.

I don't think I would have been a quick enough learner, with a good set of features ready to plug into ML, to get it to work for me when I first signed up. I doubt I would have been able to stick with it and make it work for me had I started 10 years ago with the present AI/ML offering.

Also, advanced users might just stick with the API and not pay $50 every time they want to run a regression. The level of experience as well as the amount they are investing has to be just right for this to be a cost-effective solution for a member.

Oh, is it $50 each time I want to do a regression? Maybe that makes a difference. Still, it has to make sense economically for all involved, and regressions have no memory requirements for prediction other than the coefficients. I would let newbies use that at a reduced cost.

FWIW. Just my take on marketing to newbies and how quickly you can realistically expect it to work for them.

Jim


Congrats on the Go-Live to all P123 for the great work on the AI Factor!!!

Any plan yet on how to use AI Factor in Designer Models?

Very happy that the project was successfully launched :rocket:

  • too expensive relative to my AUM
  • confusing/complex billing policy
  • more flexibility and transparency would be needed

General subjective feedback on ML models in stock selection: I am not yet convinced that more flexible (but stochastic and black-box) models would perform significantly better (in live trading) than deterministic linear models with advanced feature/target transformations.
For linear models, I can plug my model into p123 classic infrastructure as it is.


I am on the fence about using it. I have gotten some promising results, but not enough that I would spend $50 per day to try and refine it (the pricing is very confusing). I would pay $100 per month for predictions and the live port. I want the convenience of the integration with the portfolio management tools. But I am a little concerned about poisoning my test data by running too many train/validate cycles on the same data, which is driving me towards my own code, which has an inner cross-validation loop.

I think I will most likely develop my system off P123 using the downloaded data so I can run all the hyperparameter training I want in my inner CV. Then, once I am comfortable with it, I will pay to train and run it on the P123 system.
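For what it's worth, the inner-CV idea above can be sketched in plain NumPy (a toy illustration, not P123 code; I've used closed-form ridge with no intercept, and the fold splitter and alpha grid are my own choices):

```python
import numpy as np

def ridge_fit(X, y, alpha):
    """Closed-form ridge coefficients (no intercept, for brevity)."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

def kfold(n, k, rng):
    """Random k-fold split of indices 0..n-1."""
    return np.array_split(rng.permutation(n), k)

def nested_cv(X, y, alphas, outer_k=5, inner_k=3, seed=0):
    """Inner loop tunes alpha; outer loop scores on data the tuning never saw."""
    rng = np.random.default_rng(seed)
    outer_scores = []
    for test_idx in kfold(len(y), outer_k, rng):
        train = np.ones(len(y), dtype=bool)
        train[test_idx] = False
        Xtr, ytr = X[train], y[train]
        inner_folds = kfold(len(ytr), inner_k, rng)

        def inner_score(a):
            # average validation MSE across the inner folds for this alpha
            scores = []
            for val_idx in inner_folds:
                m = np.ones(len(ytr), dtype=bool)
                m[val_idx] = False
                w = ridge_fit(Xtr[m], ytr[m], a)
                scores.append(mse(Xtr[val_idx], ytr[val_idx], w))
            return np.mean(scores)

        best_alpha = min(alphas, key=inner_score)
        # refit on the full outer-training set, score on the untouched fold
        w = ridge_fit(Xtr, ytr, best_alpha)
        outer_scores.append(mse(X[test_idx], y[test_idx], w))
    return float(np.mean(outer_scores))

# Toy data: 5 features, 2 of them irrelevant
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.0, 3.0]) + 0.1 * rng.normal(size=200)
score = nested_cv(X, y, alphas=[0.01, 1.0, 100.0])
print(f"out-of-sample MSE estimate: {score:.3f}")
```

The point is that alpha (or any hyperparameter) is only ever chosen inside the outer-training data, so the outer score stays an honest estimate no matter how many settings you try.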


Agreed as well. The $100/month add-on for predictions is too expensive, and the $50 minimum is confusing.


$50 min charge

When you enable AI you get a $50 credit, which goes to your AI balance. Every night we run billing for your AI workloads and deduct from the AI balance. If the charge is greater than the balance, we bill the amount that is over, plus $50.

In other words, you will only get billed when the balance goes below $0. You only get billed every day if you use more than $50 every day.
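As pseudocode, the nightly rule looks roughly like this (a simplified sketch of the policy described above, not our actual billing code):

```python
def nightly_bill(balance, usage, replenish=50.0):
    """Apply one night's AI usage to the credit balance.

    If usage exceeds the remaining balance, the member is billed the
    overage plus a fixed replenish amount, which tops the balance back up.
    Returns (new_balance, amount_billed).
    """
    balance -= usage
    if balance >= 0:
        return balance, 0.0          # still covered by existing credit
    overage = -balance               # how far below zero we went
    return replenish, overage + replenish

# Example: start with the $50 signup credit
balance, billed = nightly_bill(50.0, 30.0)      # uses $30 -> no charge yet
balance, billed = nightly_bill(balance, 30.0)   # another $30 -> billed $10 overage + $50
print(balance, billed)
```

So light usage just draws down the credit; a charge only appears on nights the balance would go negative.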

$100 for predictions using the AIFactor() formula

We need some sort of recurring revenue for AI, which took us three years of work. With training revenue alone we'll never break even.

Thanks for the feedback. We'll make it clearer.

So $50 each day we use it, after the $50 credit? Do I finally have that correct?

So at some point, $50 on any day I use it for 3 minutes to run a regression?

Sorry I am having trouble understanding but that seems to be what you are saying and that is something I will have to consider for sure.

Okay, maybe Claude 3 is smarter than me (if it is right): "you'll be charged the overage plus $50 to replenish your credit."

Features I would like to see added:

  1. Categorical factors - These are very important for tree-based models.

  2. I would like to be able to export the model object so that I can visualize and inspect the results in Python. Or create a Python window to execute commands within P123.

  3. Inclusion of macro factors

  4. Grid search

  5. Ability to smooth the predictors to reduce turnover

  6. Ensemble models

I think the goal here is that they want you to maintain a minimum balance of $0, and they will charge you $50 to replenish it once you go negative. The alternative would be to bill you small amounts every day according to usage, which would be annoying.


With access to all of the modules and libraries in just sklearn, I would sign up. Maybe the same as this:

My present workflow cannot be done within P123, and I would not know when to expect that it could be, unless I could just do it myself.

The main problem is that people can't be sure that this will be significantly better than their original model; otherwise $100 wouldn't be a problem.


Good point. Perhaps we'll discount it for the first year. Thanks.

We are working on Categorical, Macro, Grid.

The macro factor is tricky. We re-read the Gu paper, and it seems that they are multiplying every feature by the normalized macro factor. It's also not clear what normalization they are using. What we are doing now, treating a macro factor like any other feature, is completely wrong (we wish we had never added macros to the predefined features). If anyone has suggestions on how to treat macros, please let us know.
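Our current reading of the paper, in code form (an illustration of our understanding only; the rank normalization below is one guess at what they use, not something the paper specifies):

```python
import numpy as np

def normalize_macro(series):
    """Rank-map a macro time series into [-1, 1].

    One guess at the normalization; the paper does not pin it down.
    """
    ranks = series.argsort().argsort().astype(float)
    return 2.0 * ranks / (len(series) - 1) - 1.0

def interact(stock_feats, macro_t):
    """Interact every stock characteristic with every macro value at date t.

    stock_feats: (n_stocks, n_feats) cross-section of characteristics
    macro_t:     (n_macro,) normalized macro values for the same date
    Returns (n_stocks, n_feats * n_macro): each original feature is
    replaced by its products with each macro series.
    """
    return np.kron(stock_feats, macro_t.reshape(1, -1))

# 2 stocks x 2 features, 2 macro values -> 4 interacted features per stock
feats = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
macro = np.array([0.5, -1.0])
print(interact(feats, macro))
```

The key difference from what we do now is that the macro value never enters as its own column; it only scales the cross-sectional features, so the model can learn state-dependent factor loadings.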

Ensemble models is just adding multiple AIFactors to a ranking system, no? Or do you mean testing ensemble models within a cross validation?

Not sure what this is: "Ability to smooth the predictors to reduce turnover".

Thanks

Ensemble models: Would be blending multiple AIFactors. It would be nice to see the results through the cross-validation workflow.

Smoothing predictors: Ability to take time-series averages of the predictions. It cuts turnover substantially without much signal loss.
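A minimal sketch of what I mean, assuming prediction snapshots are kept per rebalance date (the array shapes and window length are just illustrative):

```python
import numpy as np

def smooth_predictions(pred_history, window=4):
    """Average the last `window` prediction snapshots per stock.

    pred_history: (n_dates, n_stocks) array of raw model predictions,
    most recent date last. Ranking on the smoothed score changes more
    slowly than ranking on the latest snapshot, which cuts turnover.
    """
    window = min(window, len(pred_history))
    return pred_history[-window:].mean(axis=0)

# Three weekly snapshots for two stocks; smooth over the last two
history = np.array([[1.0, 2.0],
                    [3.0, 4.0],
                    [5.0, 6.0]])
print(smooth_predictions(history, window=2))
```

The ranking system would then rank on the smoothed score rather than the latest raw prediction.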