Hit Ratio or other ways to get the quality of the prediction

Hi

I was looking to diagnose the quality of my predictions.
As a first step, I'm comparing the predictions (y_pred) against y_test using a ratio.

Here is a list of metrics:

from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from scipy.stats import pearsonr, spearmanr

# 1. Mean Squared Error (MSE)
mse = mean_squared_error(y_test, y_pred)
print(f"Mean Squared Error (MSE): {mse:.3}")

# Root Mean Squared Error (RMSE)
# (squared=False is deprecated in recent scikit-learn versions, so take
# the square root of the MSE instead)
rmse = mse ** 0.5
print(f"Root Mean Squared Error (RMSE): {rmse:.3}")

# 2. Mean Absolute Error (MAE)
mae = mean_absolute_error(y_test, y_pred)
print(f"Mean Absolute Error (MAE): {mae:.3}")

# 3. R-squared (R^2)
r2 = r2_score(y_test, y_pred)
print(f"R-squared (R^2): {r2:.3}")

# 4. Pearson Correlation Coefficient
pearson_corr, _ = pearsonr(y_test, y_pred)
print(f"Pearson Correlation Coefficient: {pearson_corr:.3}")

# 5. Spearman Rank Correlation
spearman_corr, _ = spearmanr(y_test, y_pred)
print(f"Spearman Rank Correlation: {spearman_corr:.3}")

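The helpers below assume that y_test and y_pred are pandas Series with a ('ticker', 'date') MultiIndex. If y_pred comes out of model.predict as a plain NumPy array, it can be wrapped first; a minimal sketch, where model and X_test are the usual fitted estimator and test features:

import pandas as pd

# Re-attach y_test's ('ticker', 'date') MultiIndex to the raw prediction
# array so it can be sliced per date; sorting avoids slow or failing
# MultiIndex slicing on an unsorted index
y_pred = pd.Series(model.predict(X_test), index=y_test.index).sort_index()
y_test = y_test.sort_index()
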
# 6. Hit Ratio or Accuracy
def calculate_hit_ratio(y_test, y_pred, n_top_stocks=100):
    # Assuming y_test and y_pred have a multi-level index with 'ticker' and 'date'
    hit_ratios = []
    
    # Iterate over each unique date
    for date in y_test.index.get_level_values('date').unique():
        # Get the top N stocks based on predicted returns for the current date
        top_predicted_stocks = y_pred.loc[(slice(None), date)].nlargest(n_top_stocks).index.get_level_values('ticker').tolist()
        
        # Get the top N stocks based on actual returns for the current date
        top_actual_stocks = y_test.loc[(slice(None), date)].nlargest(n_top_stocks).index.get_level_values('ticker').tolist()
        
        # Calculate the hit ratio for the current date
        hits = len(set(top_predicted_stocks) & set(top_actual_stocks))
        hit_ratio = hits / n_top_stocks
        hit_ratios.append(hit_ratio)
    
    # Return the average hit ratio across all dates
    return sum(hit_ratios) / len(hit_ratios)
    
hit_ratio = calculate_hit_ratio(y_test, y_pred, n_top_stocks=100)
print(f"Hit Ratio: { hit_ratio:.3}")

# 7. Information Coefficient (IC)
def calculate_ic(y_test, y_pred):
    # Cross-sectional Pearson correlation between actual and predicted
    # returns, computed per date
    ic_scores = y_test.groupby(level='date').apply(
        lambda x: pearsonr(x, y_pred.loc[x.index])[0])
    # Rank IC (Spearman) alternative:
    # ic_scores = y_test.groupby(level='date').apply(
    #     lambda x: spearmanr(x, y_pred.loc[x.index])[0])

    # Return the mean IC score across dates
    return ic_scores.mean()

ic = calculate_ic(y_test, y_pred)
print(f"Information Coefficient (IC): {ic:.3}")

The most promising and intuitive one was the hit ratio:
Basically, for every day I check which stocks I should have selected as the top stocks according to my predictor (in this case, the largest returns) and compare them with the top stocks from y_test.
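
As a quick sanity check, here is a toy example with made-up tickers and returns (two dates, top 2 instead of top 100):

import pandas as pd

# 4 tickers x 2 dates; the values are invented daily returns
idx = pd.MultiIndex.from_product(
    [["AAA", "BBB", "CCC", "DDD"], ["2024-01-02", "2024-01-03"]],
    names=["ticker", "date"])
toy_test = pd.Series([0.02, -0.01, 0.01, 0.03, -0.02, 0.00, 0.04, 0.01], index=idx)
toy_pred = pd.Series([0.01, 0.00, 0.02, 0.02, -0.01, 0.01, 0.03, 0.00], index=idx)

# Exactly one of the two picks overlaps on each date -> prints 0.5
print(calculate_hit_ratio(toy_test, toy_pred, n_top_stocks=2))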

The hit ratio seems to scale nicely with the Pearson and Spearman rank correlations, and with the IC as well, at least most of the time (roughly 70%).

For reference, I'm hovering around a hit ratio of 7% using 100 stocks. (Whether that is good depends on the universe size: picking 100 stocks at random out of a universe of M has an expected hit ratio of 100/M, e.g. about 3.3% for M = 3000.)

Does anyone have another suggestion for how to assess the quality of the predictions? (Just looking at the return does not help, I believe.)


You are making this a classification problem and using a classification metric (accuracy). Here are a couple of other metrics for classification problems you can look at:

In this link @marco discusses some of the advantages of the Matthews correlation coefficient (MCC).

The F1 score (f1_score in scikit-learn) is similar.
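
A minimal sketch of how the top-N stock selection could be binarized so that these metrics apply; it assumes the ('ticker', 'date') MultiIndex Series from the first post, and top_n_labels is a hypothetical helper, not a scikit-learn function:

from sklearn.metrics import matthews_corrcoef, f1_score

# Label each (ticker, date) row 1 if the stock ranks in that date's
# top N by return, else 0
def top_n_labels(s, n=100):
    ranks = s.groupby(level='date').rank(ascending=False, method='first')
    return (ranks <= n).astype(int)

y_true_cls = top_n_labels(y_test)
y_pred_cls = top_n_labels(y_pred)
print(f"MCC: {matthews_corrcoef(y_true_cls, y_pred_cls):.3}")
print(f"F1: {f1_score(y_true_cls, y_pred_cls):.3}")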


@Jrinne it was not my intention to make it a classification problem. If I use the usual metrics, I think they are optimized over the whole universe; in this case I'm optimizing for the top-ranked predictions. It might also be better to compare the median of the top 20 of y_test vs. y_pred (see the sketch below).
I also checked the predictions on the training data: only around 30% of the 100 selected stocks were correct... so I probably need to improve the hyperparameters as well.
I just want to find a better measurement than the final return or Sharpe ratio of the strategy.
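
A hedged sketch of that median comparison, assuming the same ('ticker', 'date') MultiIndex Series as above (median_top_n is an illustrative helper, not an established API):

import pandas as pd

def median_top_n(y_test, y_pred, n=20):
    rows = []
    for date in y_test.index.get_level_values('date').unique():
        pred_slice = y_pred.loc[(slice(None), date)]
        test_slice = y_test.loc[(slice(None), date)]
        picked = pred_slice.nlargest(n).index           # tickers the model picks
        rows.append((test_slice.loc[picked].median(),   # realized median of the picks
                     test_slice.nlargest(n).median()))  # best achievable median
    return pd.DataFrame(rows, columns=["picked_median", "ideal_median"])

# Average across dates; the closer the two columns, the better the top-20 picks
print(median_top_n(y_test, y_pred).mean())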

best
Carsten
