Reasons for 'Big Gap' in results between a ranking system and a simulated strategy

To make this thread a bit more concrete, I'm trying to come up with reasons why the annualized return of a Simulated Strategy (SS) might fall more than 10% short of the 'Top bucket' of a Ranking System (RS) that is used as a component of that same SS, when both use the same universe.

Some reasons I came up with off the top of my head:
(a) High-turnover strategies can produce a big gap between a RS and the SS, because the RS performance test does not account for slippage while the SS does. Lower-turnover strategies should therefore have a smaller chance of a 'big gap' in annual returns (a rough back-of-the-envelope calculation is sketched after this list).
(b) The RS is not robust in the very top buckets. This should show up when using more than 20 buckets (e.g. 100 buckets): the top 5% of stocks could, for example, consist of 2.5% that outperformed greatly and 2.5% that actually did poorly, so the 20-bucket chart looks fine while the stocks the SS actually buys lag. I think this is quite unlikely, but it can happen.
(c) Buy/sell rules within the SS that do not give enough, or not consistent enough, exposure to the top-ranked stocks of the underlying RS.
(d) Too few stocks held (e.g. < 10), which increases the impact of a single bad pick in the SS. A RS only takes quantitative measures into account; if the CEO happens to go to jail (information not captured by the RS), that will likely hurt the stock price (see the Monte Carlo sketch after this list).
(e) Different settings in the SS, or SS settings that override those of the RS, such as the chosen currency (unlikely to explain 10%+ return differences in most cases) or overriding the treatment of NAs from negative to neutral, or the other way around.
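To put a rough number on (a), here is a minimal Python sketch of how slippage drag scales with turnover. The function and the 0.25% slippage figure are my own illustrative assumptions, not anything taken from the platform:

```python
# Rough arithmetic: every 100% of annual turnover is one full round trip
# (a sell plus a buy), so it pays the one-way slippage cost twice.

def slippage_drag(annual_turnover_pct: float, one_way_slippage_pct: float) -> float:
    """Approximate annual return drag, in percent."""
    round_trips = annual_turnover_pct / 100.0
    return round_trips * 2 * one_way_slippage_pct

for turnover in (100, 200, 400, 800):
    print(f"{turnover}% turnover -> ~{slippage_drag(turnover, 0.25):.1f}% annual drag")
```

At 800% turnover and 0.25% one-way slippage the drag is already ~4% per year, and ~8% at 0.5% slippage, so high turnover alone can eat most of a 10% gap.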
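And for (d), a quick Monte Carlo sketch of concentration risk (all numbers hypothetical): draw annual stock returns from a top-bucket-like distribution and compare the downside of small versus large equal-weight portfolios.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000
bucket_mean, bucket_std = 0.15, 0.40      # hypothetical top-bucket stats

for n_stocks in (5, 10, 25, 100):
    draws = rng.normal(bucket_mean, bucket_std, size=(n_sims, n_stocks))
    port = draws.mean(axis=1)             # equal-weight portfolio return
    print(f"{n_stocks:>3} stocks: mean {port.mean():+.1%}, "
          f"5th percentile {np.percentile(port, 5):+.1%}")
```

The expected return is the same at every portfolio size, but the bad tail of a 5-stock portfolio is far worse, so holding too few names can explain a one-off gap between the SS and the bucket average, though not a persistent one.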

For the case I described in this post, (a) doesn't seem likely, as the gap in returns did not appear for the US universe, only with European data. I'm going to check out (b) and (e) and report back; a sketch of how I'd check (b) follows below.
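This is the check I have in mind for (b), assuming the rank and forward-return data can be exported to a CSV (the file and column names here are hypothetical):

```python
import pandas as pd

df = pd.read_csv("ranks_and_returns_europe.csv")  # columns: rank, fwd_return

for n_buckets in (20, 100):
    df["bucket"] = pd.qcut(df["rank"], n_buckets, labels=False, duplicates="drop")
    top = df[df["bucket"] == df["bucket"].max()]
    print(f"{n_buckets} buckets: top-bucket mean forward return "
          f"{top['fwd_return'].mean():+.2%} over {len(top)} observations")
```

If the top 1% bucket lags the top 5% bucket, the RS breaks down exactly in the names the SS buys, which would not show up on a 20-bucket chart.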

I'd be interested to hear whether others have (other) ideas about reasons for big differences in results between a RS and a SS.