The AI Factor provides the ability to set the target using custom formulas, but is there a way to view the actual predictions rather than just the normalized values? From my understanding, the lift chart (with download) only shows normalized predictions. The portfolio output shows the returns of equal-weighted positions within percentiles. The returns report shows bucket performance.
None of these seem to display the actual predicted value versus the true target value. I believe this only happens when you use predefined returns and view them in total return or vs predefined benchmark.
Is there a way to access this? Any guidance would be appreciated.
I'm not sure I understand the question, but if you want to see the actual predictions, set Save Validation Predictions = Yes when you run the validation. Then you can go to Validation, Models and click on the '3 dots' button for the model and choose Download Predictions.
Funnily enough, it took quite a while (about 10 minutes) for the first output to appear, and then several results followed more quickly. Just a heads-up for other users: things here usually run fast, and I wasn't expecting such a delay with no notification.
That said, I’m not entirely sure what I’m looking at. A few quick checks suggest that the target shown is the normalized objective, not the actual values. The pred column also appears to be normalized.
It's not clear what perf represents. It looks like some form of return, but it’s not obvious how it’s calculated. Could you specify?
Where can I see the actual values alongside their corresponding predictions? For example, if I’m predicting sector-adjusted or beta-adjusted returns, I want to see both the raw target value and the raw prediction (not just normalized values). The system computes them but then drops them when it comes to returns and portfolios. I only see boilerplate output (universe / benchmark / total returns) or equal-weighted portfolio total returns. If my target was beta-adjusted, I want to see portfolios that are beta-hedged just like my target. Does this explain it better?
It's all apples to oranges this way.
If normalization is applied, it would be helpful to have the results remapped back to the original scale.
I fully understand and appreciate the process of normalization (I use it all the time), but a z-score by itself doesn’t convey the actual dispersion of the values. That dispersion is critical because even if a model predicts direction well, it’s the magnitude of the difference between top and bottom percentiles that determines whether the signal is actionable in the marketplace.
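To make the dispersion point concrete, here is a small hypothetical sketch (plain Python, nothing platform-specific): two cross-sections of raw returns with very different tradeable spreads produce identical z-scores, so a model trained only on the normalized values cannot tell them apart.

```python
import statistics

# Two hypothetical cross-sections of raw returns: same shape,
# very different dispersion.
tight = [-0.002, -0.001, 0.0, 0.001, 0.002]   # 0.4% top-to-bottom spread
wide = [-0.05, -0.025, 0.0, 0.025, 0.05]      # 10% top-to-bottom spread

def zscores(xs):
    # Standard cross-sectional z-score: subtract mean, divide by std.
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

# The normalized values are (numerically) identical, even though one
# spread is 25x the other.
same = all(abs(a - b) < 1e-9 for a, b in zip(zscores(tight), zscores(wide)))
print(same)
```

Both lists normalize to the same z-scores, which is exactly why the z-score alone can’t tell you whether the top-minus-bottom spread is worth trading.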
Did I answer your question? In short: I'm looking for the actual target values (whatever formula we use) and the predictions of those values (the predictions of the formula).
I normalize the values so the AI Factor can properly model them, but at the end of the day I want to see the real-world, re-mapped values.
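To illustrate the remapping I have in mind, here is a hedged sketch (the column names `date`, `pred`, and `target_raw` are my assumptions, not platform output): invert a per-date cross-sectional z-score using the raw target's own moments.

```python
import pandas as pd

def denormalize(preds: pd.DataFrame, raw: pd.DataFrame) -> pd.DataFrame:
    """Map z-scored predictions back to the raw target scale, per date.

    preds: columns 'date', 'pred' (normalized prediction)   [assumed names]
    raw:   columns 'date', 'target_raw' (raw target values) [assumed names]
    """
    # Per-date mean and std that the z-score normalization removed.
    stats = raw.groupby("date")["target_raw"].agg(["mean", "std"]).reset_index()
    out = preds.merge(stats, on="date")
    # Invert z = (x - mean) / std  ->  x = z * std + mean
    out["pred_raw"] = out["pred"] * out["std"] + out["mean"]
    return out
```

One caveat: if the normalization trims or clips outliers (like the 3.5 cap in the ZScore function), the inverse is only approximate at the tails.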
An example is if I model beta adjusted returns (not the canned returns taking the simple difference). How can I see those beta adjusted returns in the other sections (or in any other way)?
When you "download predictions" from the Validation, Models page, the output has target, perf and pred columns.

target = the Target normalized using the Normalization settings. You can replicate the downloaded value with, for example, ZScore("Future%Chg_D(65)", #All, 5, 0, 3.5).

perf = the future return, where the value used for the days parameter is based on the Frequency setting in your AIF. For example, if Frequency is 4 weeks and the Max Return setting in the AIF is 200, then it is UBound(Future%Chg_D(20), 200).

pred = the prediction. It is not directly normalized, but it is trained on the normalized Target.
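As a rough sketch of how one could already use those three columns (the file name and exact column names are assumptions based on the description above): bucket the download's pred column into deciles and look at the average realized perf in each, which gives a raw-return top-minus-bottom spread rather than a normalized lift chart.

```python
import pandas as pd

def decile_spread(df: pd.DataFrame) -> float:
    """Top-minus-bottom decile average of realized perf, bucketed by pred.

    Expects columns 'pred' and 'perf' as in the predictions download
    (assumed names per the description above).
    """
    # Bucket predictions into deciles (0 = lowest pred, 9 = highest).
    df = df.assign(decile=pd.qcut(df["pred"], 10, labels=False, duplicates="drop"))
    by_decile = df.groupby("decile")["perf"].mean()
    return by_decile.iloc[-1] - by_decile.iloc[0]

# Usage (file name hypothetical):
# df = pd.read_csv("predictions.csv")
# print(decile_spread(df))
```

Since perf is a raw future return, this spread is at least in real-world units, even though the target and pred columns themselves are normalized.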
The non-normalized target values are currently not available to download from within the AIFactor pages. You could use the Download Factors function to get the raw data for the formula as long as it is only price data. Non-price data requires a data license.
But there is no way to generate predictions using non-normalized targets. I suggest you create a Feature Request and include as much information as possible so it can be considered in the next round of AI Factor enhancements.
But nothing lets us view those predictions over time the way we can with returns.
It renders this section virtually useless. Yes, we get a lift chart with normalized data, but that's only part of what's needed to understand and make use of the output.
But it seems silly to offer the ability to model anything other than those canned returns when we can't analyze it any further than the lift chart download. I'd love to hear how others are using this for anything other than the predefined returns.
Unfortunately, this won't rank high on the list of feature requests.