Hey Yuval! Mind if I ask you a question regarding one of your models?

I was looking up ideas on avoiding value traps on my own when I stumbled on an article and I realized it was written by you!

When serendipity of that sort happens, I’ve learned through experience to pay closer attention. Since we are both here on P123, it occurred to me that I could more naturally ask you directly about the model you presented in that article.

I tried to recreate it, and some of the choices didn’t look obvious, so I’m curious why you made and stuck with them. Would it be okay if I asked you more about it? In general, is that something you would welcome?

Sure, that’s absolutely OK!


Great! The article and model I have in mind is the one you wrote in early 2019 about avoiding value traps, in which you combined a basic valuation ranking with 12 extra conditions. From what I can gather, you arrived at those 12 conditions based on what seemed logically sensible. In another article about a different model, you described a trial-and-error process of testing something in the neighborhood of 100 factors, making sure that each one individually had a recent history of contributing to outperformance. If I understand correctly, you didn’t do that with the value-trap model?
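For concreteness, a single-factor test of the sort described above could be sketched roughly as follows. Everything here is invented for illustration (the synthetic universe, the bucket size, the spread measure); it is not P123’s actual engine or the article’s procedure, just one simple way to check whether a factor’s top-ranked stocks have recently beaten the universe average.

```python
import random

random.seed(0)  # deterministic synthetic data for the illustration

def factor_spread(values_and_returns, top_frac=0.2):
    """Mean return of the top `top_frac` of stocks ranked by factor value,
    minus the mean return of the whole universe. A positive spread suggests
    the factor has recently contributed to outperformance."""
    ranked = sorted(values_and_returns, key=lambda vr: vr[0], reverse=True)
    n_top = max(1, int(len(ranked) * top_frac))
    top_mean = sum(r for _, r in ranked[:n_top]) / n_top
    all_mean = sum(r for _, r in ranked) / len(ranked)
    return top_mean - all_mean

# Fake universe of 200 stocks: one factor that genuinely predicts returns,
# and one that is pure noise.
n = 200
good = [(x, 0.05 * x + random.gauss(0, 0.02))
        for x in (random.random() for _ in range(n))]
noise = [(random.random(), random.gauss(0, 0.02)) for _ in range(n)]

print(round(factor_spread(good), 4))   # clearly positive spread
print(round(factor_spread(noise), 4))  # spread close to zero
```

Repeating a check like this for each of ~100 candidate factors, and keeping only those with a positive spread, is one plausible reading of the trial-and-error process described above.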

Looking at my recreation of your model, the backtested performance at the time you created it looks decent but not particularly amazing. What gave you the confidence that it would look as good or even better going forward, or that it was even a worthwhile model to write about in the first place? There have also been debates here about whether it is better to have many factors or few in a ranking system. You added 12 additional conditions, which might lead one to think you were overoptimizing, but that doesn’t appear to have been the case with your model. Why did you think the additional conditions wouldn’t lead to overoptimization? I’m guessing it’s because they weren’t based on, or suggested by, a backtest of the system itself?

Thanks for your thoughts!

I’m assuming you’re talking about How To Avoid Value Traps | Seeking Alpha or How to Avoid Value Traps - invest(igations) (same article)?

In the second half of the article I explained the backtesting procedure I used to arrive at those 12 extra conditions, and you can read there that I did test about 100 factors. At the end of the article I backtested a strategy based on the conditions and got a very high annualized return. I also explained that it’s best to use these conditions in a ranking system rather than to screen stocks out. There is definitely some optimization going on here. Maybe backtesting to isolate these factors and then backtesting a system based on them is a good example of overoptimizing; maybe they should have been backtested on a different universe. At any rate, that article is four years old now, so you might get very different results doing the same kind of test today.
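The ranking-versus-screening distinction mentioned above can be made concrete with a minimal sketch. The three conditions and the toy stocks below are made up for illustration; they are not the article’s actual 12 conditions, and real ranking systems (like P123’s) weight and combine factors far more elaborately.

```python
# Hypothetical illustration only: hard screening eliminates a stock that
# fails any single condition, while ranking just penalizes it.

def screen(stocks, conditions):
    """Hard screen: drop any stock that fails even one condition."""
    return [s for s in stocks if all(cond(s) for cond in conditions)]

def rank(stocks, conditions):
    """Soft ranking: score each stock by the fraction of conditions it
    passes, then sort best-first."""
    scored = [(sum(cond(s) for cond in conditions) / len(conditions), s)
              for s in stocks]
    return [s for score, s in sorted(scored, key=lambda t: -t[0])]

# Toy universe with invented fundamentals.
stocks = [
    {"ticker": "AAA", "fcf_positive": True,  "debt_to_equity": 0.4, "sales_growth": 0.10},
    {"ticker": "BBB", "fcf_positive": True,  "debt_to_equity": 2.5, "sales_growth": 0.15},
    {"ticker": "CCC", "fcf_positive": False, "debt_to_equity": 0.2, "sales_growth": 0.05},
]

conditions = [
    lambda s: s["fcf_positive"],          # e.g. positive free cash flow
    lambda s: s["debt_to_equity"] < 1.0,  # e.g. modest leverage
    lambda s: s["sales_growth"] > 0.0,    # e.g. growing sales
]

print([s["ticker"] for s in screen(stocks, conditions)])  # only AAA survives
print([s["ticker"] for s in rank(stocks, conditions)])    # AAA ranked first
```

Here BBB and CCC each fail one condition: screening throws both out, while ranking keeps them in the universe at a lower score, which is one way to read the advice to rank on the conditions rather than screen with them.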


Somehow, on rereading, I missed the line saying that you did run that 100-factor test for this model, just as you did for the other one. All right, so that is a standard part of your procedure in developing models.

Even though the model you presented is four years old now, the individual-factor tests I’m more familiar with are the ones danp and denny did 17 years ago, so it’s still a much-needed update for me.