I have been developing a small-cap sim, and over the years, as the portfolio grows, it would need to purchase millions of shares of a small cap in order to stay consistent with my allocation targets. At what point does the sim become unrealistic, and how do you address this issue as the portfolio grows?
I tend to view this more as a portfolio construction and implementation problem than a question of whether the sim itself becomes unrealistic. When a model is built, I think you generally have a good sense of the reasonable allocation and capacity constraints, which then gives you levers to manage growth—whether that’s rolling into more names, backfilling toward higher MDV, or simply capping that sleeve at a defined dollar level.
Some investors are more comfortable becoming highly concentrated in single, smaller names. That can work, but in my view, it requires significant additional thought around execution, liquidity, and trade management. For my purposes, I’d rather address capacity explicitly at the portfolio level than force the model beyond what I think are sensible implementation bounds.
Consider your Strategy an index or benchmark, not something to be copied trade-for-trade.
The "capacity" concern is theoretical—it only becomes real if you're deploying serious capital. For your own account, simply create a Book with your actual starting capital and add the Strategy as an asset. The positions will scale automatically to match your account size.
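The automatic scaling described above is just proportional sizing: each target weight is applied to your actual capital rather than the sim's. A minimal sketch, with illustrative tickers and prices (not the actual Book implementation):

```python
# Hypothetical sketch: converting model weights into share counts for a
# given account size. Tickers, prices, and capital are made-up examples.

def scale_positions(weights, capital, prices):
    """Turn target weights into whole-share position sizes."""
    shares = {}
    for ticker, w in weights.items():
        dollars = capital * w                      # dollar allocation for this name
        shares[ticker] = int(dollars // prices[ticker])  # whole shares only
    return shares

# A $100k account holding two names at equal weight:
print(scale_positions({"AAA": 0.5, "BBB": 0.5},
                      capital=100_000,
                      prices={"AAA": 25.0, "BBB": 10.0}))
# -> {'AAA': 2000, 'BBB': 5000}
```

The point is that the same Strategy produces sensible position sizes whether the account is $100k or $10M; capacity only bites when those scaled positions become large relative to traded volume.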
@WalterW When you say Book, do you mean the Live Book feature? I haven’t used that before so I will have to research it.
Yes, Live Book.
A useful question might be: how many stocks does your ranking system work reasonably well with? If the honest answer is “not many,” that can sometimes point to overfitting or a model that’s too narrowly tuned to a small subset of names.
In my own testing, while I typically trade a fairly concentrated portfolio (15 stocks), the underlying ranking system appears to remain profitable across much larger portfolios — on the order of 100 stocks or more. When the time comes, liquidity can be addressed simply by holding more names rather than forcing larger positions into the same small set of stocks.
Before reaching that scale, another practical approach is to move toward percent-of-volume (POV) execution. In that framework, lower-liquidity names naturally receive partial fills while higher-liquidity names fill completely, which acts as an automatic liquidity adjustment without needing to redesign the model.
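The self-throttling behavior of POV can be sketched in a few lines. The 10% participation rate and the ADV figures below are assumptions for illustration, not a recommendation:

```python
# Hedged sketch of a percent-of-volume (POV) cap: you never trade more than
# a fixed fraction of average daily volume (ADV). Numbers are illustrative.

def pov_fill(target_shares, adv, participation=0.10, days=1):
    """Return (filled, unfilled) given a POV participation limit."""
    cap = int(adv * participation * days)  # max shares tradable in the window
    filled = min(target_shares, cap)
    return filled, target_shares - filled

# A liquid name fills completely; an illiquid name gets a partial fill:
print(pov_fill(5_000, adv=1_000_000))   # (5000, 0)
print(pov_fill(50_000, adv=200_000))    # (20000, 30000)
```

The lower-liquidity name simply carries an unfilled remainder into the next session, which is exactly the automatic liquidity adjustment described above.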
So my suggestion would be: first evaluate how robust your ranking system is as portfolio size expands. If it can’t scale beyond a very small number of holdings, it may still be useful — but it’s probably worth continuing to work on it with scalability in mind.
From an execution standpoint, POV or VWAP can go a long way toward managing capacity.
That’s how I’m looking at my own models for now.
Nice place to be!
Me too, BTW. 15 stocks seems better for my models (for both returns and manageability), so I don’t want to go to 100. But I will still be ahead of Vanguard or mutual funds, I think. And maybe not so worried about money by then.
For me, the main takeaway — and really the reason I chimed in — is that you don’t necessarily have to keep reshaping the universe to deal with liquidity. Letting execution handle it via POV allows the model to express itself naturally. Lower-liquidity names self-throttle through partial fills, higher-liquidity names fill cleanly, and you avoid having to micromanage liquidity constraints upstream.
That framing removed a lot of future worry for me, so I figured it was worth sharing.
It becomes unrealistic at the point where your real position is a significant portion of the daily traded volume (or, perhaps more importantly for non-algorithmic orders, the hourly volume). If your account is not yet large enough to make a dent in that, the sim is still reliable as far as liquidity impact goes. Past performance and future performance are two different things, of course, but liquidity should be OK.
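That rule of thumb is easy to check mechanically. The 5% threshold below is an assumption I've picked for illustration; pick whatever fraction of daily volume you consider a "significant" footprint:

```python
# Illustrative liquidity check: flag a position once it exceeds some
# fraction of daily traded volume. The 5% threshold is an assumption.

def liquidity_flag(position_shares, daily_volume, max_pct=0.05):
    """Return (share of daily volume, True if over the threshold)."""
    pct = position_shares / daily_volume
    return pct, pct > max_pct

print(liquidity_flag(10_000, 1_000_000))   # (0.01, False) -- small footprint
print(liquidity_flag(200_000, 1_000_000))  # (0.2, True)   -- sim likely unrealistic
```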