I have been digging into the publicly available simulations and DM models, probably over 150 different models at this point. There is a common thread: most of them deliver **only half (or less) of their backtested performance out of sample.**

**Does anyone have any thoughts on why this might be:**

- Overfitting?
- Costs not modeled realistically (slippage, commissions)?
- Timing luck?
- The factor's predictive ability has been arbitraged away?
- Too strong a tilt toward a factor that has been out of favor?
- Changes in the underlying universe?
- Overweighting a particular cap size or industry?
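Overfitting alone can produce this pattern through selection bias. A minimal sketch (everything here is illustrative: the "strategies" are pure random noise with zero true edge, so any in-sample Sharpe is luck):

```python
# Sketch of selection bias / overfitting. Assumption: every "strategy" is
# pure noise (zero true edge), so any in-sample Sharpe is luck.
import random
import statistics

random.seed(42)

N_STRATEGIES = 200
N_DAYS = 500
SPLIT = N_DAYS // 2   # first half in-sample, second half out-of-sample

def sharpe(returns):
    """Annualized Sharpe ratio of a list of daily returns."""
    sd = statistics.pstdev(returns)
    return statistics.mean(returns) / sd * 252 ** 0.5 if sd > 0 else 0.0

# Each "strategy" is just i.i.d. daily noise.
strategies = [[random.gauss(0, 0.01) for _ in range(N_DAYS)]
              for _ in range(N_STRATEGIES)]

# Pick the strategy with the best in-sample Sharpe, as a vendor publishing
# only its best-looking simulation implicitly does...
best = max(strategies, key=lambda s: sharpe(s[:SPLIT]))

is_sharpe = sharpe(best[:SPLIT])
oos_sharpe = sharpe(best[SPLIT:])
print(f"in-sample Sharpe:     {is_sharpe:.2f}")
print(f"out-of-sample Sharpe: {oos_sharpe:.2f}")
```

With zero true edge, the in-sample winner's out-of-sample Sharpe reverts toward zero; the same mechanism, applied to strategies with some real edge, shaves live results toward "half or less" of the backtest.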

I looked into some of the published studies, but none of them offers an obvious answer as to whether its findings transfer to DM models or to publicly available simulations:

- “The Limits of Quantitative Modeling Techniques and the Future of Computational Trading” by Marcos Lopez de Prado, 2018 - The study discusses the limitations of backtesting in quantitative finance and highlights the importance of incorporating uncertainty into models. It argues that models that fail to account for uncertainty will produce inaccurate results.
- “Backtesting Trading Strategies with R” by Tim Trice, 2019 - The study examines the process of backtesting trading strategies using R and highlights the pitfalls of overfitting models to historical data. It recommends using out-of-sample testing to improve the accuracy of models.
- “Backtesting Investment Strategies with R” by Gero Weichert, 2018 - The study explores the challenges of backtesting investment strategies using R and discusses the importance of data quality and model assumptions. It recommends a thorough validation process to ensure the accuracy of models.
- “Backtesting and Simulation of High Frequency Trading Strategies” by D. Easley, M. Lopez de Prado, and M. O’Hara, 2012 - The study examines the difficulties of backtesting high-frequency trading strategies and highlights the importance of accurate market data. It recommends using more sophisticated market models to account for market dynamics.
- “The Risks of Backtesting” by Paul Barnes, 2018 - The study examines the risks associated with backtesting and argues that historical data is not always representative of today’s market conditions. It recommends incorporating economic intuition into models to improve accuracy.
- “The Pitfalls of Backtesting” by Richard Martin, 2017 - The study explores the limitations of backtesting and highlights the challenges of selecting appropriate historical data. It recommends using multiple datasets to validate models and account for market volatility.
- “The Dangers of Backtesting” by David Easley, 2017 - The study examines the risks associated with backtesting and argues that models can be overly optimistic due to overfitting. It recommends using robust statistical methods to account for uncertainty.
- “The Challenges of Backtesting” by Thomas Wiecki, 2018 - The study explores the challenges of backtesting and highlights the importance of data quality and model assumptions. It recommends using a variety of techniques to validate models and improve accuracy.
- “The Flaws of Backtesting” by Matthew Dixon and Kerem Tomak, 2018 - The study examines the flaws of backtesting and highlights the challenges of selecting appropriate historical data. It recommends using robust statistical methods to account for uncertainty and avoiding overfitting.
- “The Limitations of Backtesting” by Michael Dempster, 2018 - The study explores the limitations of backtesting and highlights the challenges of incorporating market dynamics into models. It recommends using more sophisticated models to account for changes in market conditions.
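Several of the pieces above converge on the same prescription: evaluate on data the optimizer never saw. A minimal walk-forward sketch of that idea (the moving-average crossover rule, the parameter grid, and the random-walk prices are all illustrative assumptions, not any particular DM model):

```python
# Walk-forward validation sketch: pick the best parameters on one window,
# then score them on the NEXT window, which the optimizer never saw.
# Prices are a synthetic random walk, so there is no real edge to find.
import random
from itertools import product

random.seed(7)

prices = [100.0]
for _ in range(1499):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))

def ma(series, n, t):
    """Mean of the n values ending just before index t (no lookahead)."""
    return sum(series[t - n:t]) / n

def total_return(prices, start, end, fast, slow):
    """Long when fast MA > slow MA over [start, end); flat otherwise."""
    total = 1.0
    for t in range(max(start, slow), end):
        if ma(prices, fast, t) > ma(prices, slow, t):
            total *= prices[t] / prices[t - 1]
    return total - 1.0

PARAMS = [(f, s) for f, s in product([5, 10, 20], [50, 100, 200]) if f < s]
WINDOW = 250

results = []
for start in range(200, len(prices) - 2 * WINDOW, WINDOW):
    # "Fit": pick the best-performing parameters on the training window.
    best = max(PARAMS,
               key=lambda p: total_return(prices, start, start + WINDOW, *p))
    train_r = total_return(prices, start, start + WINDOW, *best)
    # "Test": apply those frozen parameters to the next, unseen window.
    test_r = total_return(prices, start + WINDOW, start + 2 * WINDOW, *best)
    results.append((best, train_r, test_r))
    print(f"params {best}: train {train_r:+.1%}  test {test_r:+.1%}")
```

The gap between the train and test columns is exactly the in-sample/out-of-sample gap the post is asking about; a published simulation that only ever reports the "train" column will look roughly this much better than live results.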