Optimal position sizing for a VIX ETN strategy?

@Jim,

I think your Kelly formula has an error. The Kelly formula would be:

(probwin * (wingain/avgloss + 1) - 1) / (wingain/avgloss)

I don’t think it will ever recommend investing over 100% (although it can produce large negative numbers). A simple way to check your formula is to plug in a 50% win rate and 1:1 odds (average win equal to average loss). This should give a Kelly fraction of 0.
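Here’s that check in Python (the function name is mine, just for illustration):

[code]
def kelly_fraction(prob_win, avg_gain, avg_loss):
    """Kelly fraction: (p * (b + 1) - 1) / b, where b = avg_gain / avg_loss."""
    b = avg_gain / avg_loss
    return (prob_win * (b + 1) - 1) / b

# Sanity check: 50% win rate, 1:1 odds should give exactly 0.
print(kelly_fraction(0.50, 1.0, 1.0))    # 0.0

# A losing game gives a large negative number, never a value above 1.
print(kelly_fraction(0.30, 0.05, 0.10))  # -1.1
[/code]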

@Chipper…I think allocating to VIX strategies (and many P123 strategies) based on simulation-derived numbers is not likely to be effective. The numbers come from simulations with a huge number of input parameters, many iterations tried, and frequently optimized outcomes.

Nearly all forms of asset allocation (unless highly constrained) produce huge ranges of suggested allocation weights from fairly minor changes in input statistics.

So…maybe it’s fine to use this criterion as a starting point. But…as with all other asset-allocation optimization problems, I’d suggest imposing top-down, ‘user view’ minimum and maximum constraints. Using this formula (or any formula, including Bayesian views) is really, it seems to me, just another form of mean-variance optimization.

Also…in this case it’s fairly unlikely that any systematic trading system’s bets are independent. It may also not be the case that you get a large number of bets, especially on VIX (and if the initial asset allocation is too high) and on ETNs linked to VIX. One large trader trading this…one bad news item about Barclays…one prolonged ‘event’ that alters VIX from its historical averages…all will be non-independent events that alter the system’s ‘input’ variables…and change its Kelly fraction and allocation weights.

So…if you do want to use it, I’d suggest a) testing a range of inputs on the critical variables and running a simple Monte Carlo simulation on it, b) hard constraints on allocation, and c) investing only a fraction of the Kelly-suggested weight.

The above tests will show you sensitivity to win rate and gain/loss ratios. They will also give you a sense of the conditions under which the model fails, and the likely limits of confidence (if any).
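Here’s a minimal sketch of that sensitivity test in Python. The uncertainty bands below are made up for illustration; substitute your own:

[code]
import random

def kelly_fraction(prob_win, avg_gain, avg_loss):
    b = avg_gain / avg_loss
    return (prob_win * (b + 1) - 1) / b

random.seed(1)
fractions = []
for _ in range(10_000):
    # Perturb the simulation's point estimates within assumed bands.
    p = random.uniform(0.50, 0.64)  # win rate somewhere around 57%
    g = random.uniform(0.10, 0.18)  # avg gain somewhere around 14%
    l = random.uniform(0.05, 0.11)  # avg loss somewhere around 8%
    fractions.append(kelly_fraction(p, g, l))

fractions.sort()
n = len(fractions)
print("5th percentile :", fractions[n // 20])
print("median         :", fractions[n // 2])
print("95th percentile:", fractions[-(n // 20)])
[/code]

If the 5th percentile is near zero while the median is around 0.32, that spread is itself the argument for hard constraints and fractional Kelly.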

I’d then suggest beginning investment in year 1 at half the ‘hard constraint’ level you think you are comfortable with.

Here’s one paper on using Kelly Criteria in the multi-asset allocation process:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2259133

Best,
Tom

@Jim…your specific numbers (57% win rate, 14% avg gain / 8% avg loss) suggest a Kelly fraction of 0.32. This number suggests ‘betting’ 32% of your portfolio on a single bet IF only one strategy is available (and a large number of bets are available and the bets are independent).

This number doesn’t tell you anything about how much cash you would allocate to that strategy among others. That’s a much more complex multi-asset, multivariate optimization problem.

The most naive and simple way to approach this would be to divide the problem into two steps:

  1. STEP 1. Set your asset allocations using whatever methods you are comfortable with (equal weight, equal risk, mean-variance with constraints, etc.).
  2. STEP 2. Adjust internal strategy allocation weights within these bands based on things like the Kelly criterion, AR%, Sortino, and correlations (rolling and peak); a minimal sketch follows this list.
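As a toy illustration of STEP 2 in Python (the bands and raw weights below are made-up numbers, not recommendations):

[code]
def clamp_to_bands(raw_weights, bands):
    # Clip each strategy's suggested weight to its [min, max] band.
    # Anything left over stays in cash rather than being renormalized,
    # so no band is ever violated.
    clipped = {k: min(max(w, bands[k][0]), bands[k][1])
               for k, w in raw_weights.items()}
    clipped["cash"] = max(0.0, 1.0 - sum(clipped.values()))
    return clipped

# Raw weights from Kelly/Sortino/etc.; bands from STEP 1's top-down views.
raw = {"vix_strategy": 0.32, "equity_models": 0.50, "bonds": 0.18}
bands = {"vix_strategy": (0.00, 0.05),
         "equity_models": (0.20, 0.60),
         "bonds": (0.10, 0.40)}
print(clamp_to_bands(raw, bands))
# vix_strategy gets capped at 0.05; the excess (~0.27) sits in cash
[/code]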

Investors also have to look at income needs from their portfolios. So…it’s likely best to use several methods of asset allocation and then blend the results using ‘common sense.’

Best,
Tom

@Tom, I don’t use leverage either. I think if you plug those numbers into the Kelly calculator link that Chaim provided, you will find they are correct.

My only point was that to use the Kelly criterion you have to be very aware of your assumptions. Do you use average win and average loss? If you assume a 100% loss, then what do you use as your gain? My only other point was that what Chaim did was pretty good.

Of course, the most important (not true) assumption is that the trades are independent. Chaim tries to get around this by assuming that volatility from year to year is independent, but he is rightly critical of this assumption for individual stocks. Not a perfect assumption, but perhaps good enough for his uses.

I stand by this concern about the assumptions. What do you recommend using?

BTW, different assumptions can (and do) lead to different formulas (or at least different definitions of the variables) and dramatically different Kelly % recommendations, which is again my point. Without looking in detail, I assume all of your math is correct. Your math has been perfect in the past.

Jim,

Thanks for the kind words. I didn’t mean to criticize your conclusions; I was just making sure that we are on the same page. Thanks also for pointing out the issue of auto-correlation. I am thinking about how to deal with it.

EDIT:
Okay, I checked my volatility trades for auto-correlation and got an R-squared of 0.04% (which is close to zero). Therefore, in this case at least, I don’t have to worry about auto-correlation. Whew! It’s complicated enough without it.

@All,

This thread produced a lot of thoughtful comments, and I think we can all agree that the Kelly formula certainly has its limits when it comes to investing, if it can be used at all.

So, going back to the subject of this thread, what would be a better tool for position sizing an ETN like XIV? I can use a rule of thumb like 5%. But owning XIV is like selling insurance. I wonder how insurance companies go about these things. Insurance companies also have correlated risks. For example, if there is a hurricane in Florida then it doesn’t damage just one house. So they must have some method of risk control, don’t they? Why reinvent the wheel?

Chipper,

Selling VIX is a single risk…it’s selling volatility on S&P 500 futures with near-term maturities. It’s one individual bet.

That’s like selling one insurance policy on one person…or only selling insurance on retail office buildings in Florida. How much do you want to bet on any one policy?

Insurance companies rely on large amounts of historical data, statistics, and diversification. They have intensive analytics and risk management. I am not an expert, but I did take classes on this back in school.

That’s why most companies diversify widely around a) geographical concentration, b) types of policies written and c) numbers of policies written.

Most companies stick to insuring things where the risks can be quantified (e.g., mortality tables…or fire and property damage risks).

Predicting the rate and duration of external shocks and/or VIX spikes…is not something that can likely be done very well. Predicting the failure rate of ETNs focused on VIX is also hard. And predicting investor reactions to ETN asset value changes is additionally risky.

So…why do insurance companies fail? The biggest reason is failure in underwriting. The second biggest is insufficient loss reserves. See:
http://www.iso.com/Research-and-Analyses/ISO-Review/Challlenging-Times-Increase-Need-to-Manage-Underwriting-Risk.html

From the paper:

[i]To compete intelligently in an increasingly challenging environment, insurers must understand the true cost of the individual exposures they underwrite, and they must manage their overall underwriting risk. The key to understanding the true cost of individual exposures is the application of advanced analytics to large volumes of highly granular data, enabling each exposure to be underwritten and priced based on its unique characteristics. The key to managing overall underwriting risk is measuring uncertainty about the frequency and severity of insured losses.

Quantifying the uncertainty in insured losses entails fitting loss distribution models to actual frequency and severity data, with the process yielding statistical measures of the variation in frequency and severity around expected results. Quantifying overall underwriting risk also requires information about how results for different classes, lines, and/or profit centers are correlated, so that the loss distributions for each element of an insurer’s operation can be melded together in an aggregate loss distribution that correctly measures uncertainty in results for the entity as a whole.

Once an insurer understands the uncertainty in its underwriting results, it can use that information to guide crucial pricing, underwriting, reinsurance, and capital-management decisions. For example, to determine how much capital to hold in case actual results prove worse than expected results, an insurer could express its tolerance for risk quantitatively (e.g., annual probability of insolvency less than 1 percent) and then use its aggregate loss distribution to solve for the amount of capital necessary to achieve a given level of security. Thus, the insurer could manage its capital to achieve adequate security while avoiding unnecessary costs associated with excess capital.
[/i]

So…I suggest there is not enough data available on trading ETNs based on VIX to really do the above steps well. There is also not enough data on correlation patterns across event types. And…selling (or buying) VIX is just one possible arrow in a balanced portfolio. And a simple, small allocation ‘test’ is still likely warranted.

@Jim. I think the on-line Kelly fraction calculator is not following the standard formula:
Kelly % = W – [(1 – W) / R]

The above formula can’t go above 1: since (1 – W)/R is never negative, the result is at most W, which is itself a probability no greater than 1.

Best,
Tom

Tom, insurance companies deal with uncertainties in the data all the time. Some companies sell insurance for events that have no historical data at all. How do they manage risk with limited data?

Tom, I think you make an important point. If you are going to make a Kelly bet based on annual returns, then you should place the bet at the start of the year and not add to it. You could then place another bet at the beginning of the next year. That way your losses are limited (at least for that year).

No one else said this, but I have a tendency to think I would place 46% in XIV and then keep 46% of my entire portfolio in XIV as it goes up or down. This is quite different and would be the road to ruin.
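A toy simulation of the difference in Python (the return sequence is invented just to make the point):

[code]
# Suppose XIV loses 50% in each of four consecutive months of a bad year.
f = 0.46
returns = [-0.50, -0.50, -0.50, -0.50]

# One-time bet: only the original 46% is ever at risk.
bet = f
for r in returns:
    bet *= (1 + r)
print("one-time bet:     ", (1 - f) + bet)  # ~0.57, floor is 0.54

# Constantly rebalancing back to 46% keeps feeding the loser.
wealth = 1.0
for r in returns:
    wealth *= (1 + f * r)
print("constant fraction:", wealth)          # ~0.35 and still falling
[/code]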

I said it before and I will say it again. I wouldn’t go with 1/2 Kelly but I like what Chaim has done. XIV would have to change a lot for this not to work.

Chipper - Lloyd’s of London offloads the insurance risk onto investors. Some years ago, several investors in Alberta ended up owing millions on insurance claims through Lloyd’s. Apparently they were earning a good interest rate on their investment but didn’t understand that they had unlimited liability from insurance claims.

Steve

[quote]
The point is, if you consider a strategy (or asset) A and a strategy B which is strategy A leveraged N times, the Kelly criterion gives the same recommended allocation for both: W (probability) is the same, and R (avg win/avg loss) is the same. Example: if you take a daily bet on SPY and a daily bet on UPRO, you get the same allocation for both games, nevertheless the risks are very different. Maybe there’s a mistake in my reasoning, but where is it?
[/quote]@Fred, good point. Could it be that the Kelly formula assumes that the maximum loss is -100%? There seems to be some discussion about this on Wikipedia.
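If so, a more general form resolves the puzzle. Writing g and l for the average gain and loss fractions per dollar allocated, maximizing expected log wealth gives f = W/l - (1 - W)/g, which reduces to the usual W - (1 - W)/R when l = 1 (i.e., when a loss costs the whole stake). A quick sketch (my own illustration, not from the calculator):

[code]
def kelly_general(p, gain_frac, loss_frac):
    # f = p/l - (1-p)/g. With loss_frac = 1 this is the usual W - (1-W)/R.
    # f here is in units of dollars *allocated*, not dollars risked, so it
    # can exceed 1 when the per-bet loss fraction is small.
    return p / loss_frac - (1 - p) / gain_frac

# Unleveraged daily bet: +1% on a win, -1% on a loss, 55% win rate.
print(kelly_general(0.55, 0.01, 0.01))  # 10.0

# Leveraged 3x: W and R are unchanged, but gains and losses both triple,
# so the formula allocates exactly one third as much. Same risk taken.
print(kelly_general(0.55, 0.03, 0.03))  # ~3.33
[/code]

So the naive form implicitly assumes every loss costs 100% of the stake; once g and l are tracked separately, SPY and UPRO stop looking identical.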

As a former actuary, I feel compelled to say that there is no free lunch; you know for sure that someone holds the risk. The insurance company merely tries to manage and mitigate it so they can stay in business and make a profit. When you have limited data, you either don’t offer the product, or you increase your prices so that, over many of these one-off situations, your reserves can cover it. But that doesn’t mean you can’t go out of business or that your policyholders and/or investors won’t be SOL.

And just because XIV has never gone to zero doesn’t mean it won’t. It most likely would have during 1987.

TD,

Thanks for recommending The Mathematics of Money Management. It is already entertaining. I’m not far enough along to know if it will make me a better investor. As I understand it, the book promises to make some of this discussion more practical. I hope so.

Jim, yes, I absolutely agree. It is essential that people understand Kelly and, probably more importantly, Vince’s Optimal f, even if you would never entertain them in a real portfolio. Just as, even if one didn’t believe that markets are totally efficient, they’d sure as heck better understand efficient markets theory. Understanding how events might play out in a perfect world certainly helps you even in those circumstances where they are less than perfect.

Tom C

Chipper,

I’ve been trying to help. Sorry if I haven’t. I think, perhaps, you are trying to reinvent the wheel around ‘multi-asset’ portfolio optimization with simulation-based inputs in which there is a high degree of parameter uncertainty. This is a fairly well-studied field going back to, at least, Markowitz in the 1950s.

See:

  1. http://ccfr.org.cn/cicf2005/paper/20050116040506.PDF
  2. http://www.blacklitterman.org/index.html

I don’t think the correct comparison is insurance companies. I think it’s better for you to look at professional ‘multi strategy’ asset allocators. They face this problem every day. You are free to disagree with me.

The basic approaches taken are:

  1. Naive approach of equal weights to diverse strategy baskets (this ‘naive’ approach is still used by some top endowments and hedge funds…they may set a limit of 2% to a manager and then 10-15% to an asset class based on the quality of managers they find).
  2. Risk balanced weights (Risk parity).
  3. Mean-variance-based optimization (or CVaR-type optimization).
  4. Black-Litterman models…
  5. Or more advanced Bayesian methods (like the paper linked above).

I have tried all of these. When there is a high degree of uncertainty around parameters (e.g., mean returns and/or correlations)…these models either a) recommend very low weights (most of the more modern methods) or b) have huge weight swings.

The above links and papers will show you how the professionals in this field use math and programming to do what you are trying to. And have links to many more papers.

However, looking at multi-asset managers, most have very small to zero allocations to options selling. They have answered this question (selling Vol. as an asset class) and said their optimal weights are zero (or typically well under 5%). When they allocate they tend to choose a basket of top managers with 5-10 years of out-of-sample results (often wanting to see risk management through 2 big down markets).

If you are interested, there is also a Barclays ETN (VQT) that you could look at - they dynamically adjust exposure to VIX between roughly 2.5% and 40-something percent. You can look at their holdings at any point in time to see what they think the optimal weight is…but, from what I know, this is a very extreme approach.

Good luck.
Best,
Tom

Chaim,

Thanks for the information on auto-correlation. Can you give any more information on your techniques? I.e., did you run a regression with the VIX on the Y-axis and the previous month on the X-axis?

Chipper,

Here are some interesting links to free reading on the ‘money management’ work Tom C. was mentioning above:

  1. (Not math, concept overview): http://www.automated-trading-system.com/wp-content/uploads/2010/03/Vince-LeverageSpaceModel.pdf
  2. Longer text here: http://www.forexhug.com/ebooks/MathematicsMoneyManagement.pdf
  3. Code in R for implementing this…and some testing results: http://www.r-bloggers.com/the-leverage-space-trading-model/
  4. file:///Users/tomaustin/Downloads/Ralph%20Vince%20-%20Leverage%20Space.pdf

It’s interesting reading.

However…all of these ‘money management’ systems above remain very sensitive to the accuracy of the model ‘inputs’. In this case, these all come from our simulations. The inputs from the simulations have huge built-in ranges of potential outcomes and inherent biases / misstatements around things like a) win rate and b) avg. gain per win. Many of these key inputs have been optimized. The same goes (if you are using them) for correlations…and for the distributions of underlying data used to project forward outcomes.

These guys write on this (and sell a software package to do it, I think):
http://www.stator-afm.com/optimising-the-position-size/

One of the things they do…and that’s recommended…is ‘randomization’ through Monte Carlo analysis of the historical backtest returns…so you can see what would happen if the historically observed trades had happened in a different sequential order. That would be a great feature on P123.
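A minimal sketch of that randomization in Python, using a made-up list of trade returns:

[code]
import random

def max_drawdown(trade_returns):
    # Worst peak-to-trough equity drop over a sequence of trade returns.
    equity, peak, worst = 1.0, 1.0, 0.0
    for r in trade_returns:
        equity *= (1 + r)
        peak = max(peak, equity)
        worst = max(worst, 1 - equity / peak)
    return worst

random.seed(1)
trades = [0.14, -0.08, 0.14, 0.14, -0.08, 0.14, -0.08, 0.14]  # stand-in backtest

drawdowns = []
for _ in range(5_000):
    random.shuffle(trades)
    drawdowns.append(max_drawdown(trades))

drawdowns.sort()
print("median max drawdown:", drawdowns[len(drawdowns) // 2])
print("95th percentile    :", drawdowns[int(len(drawdowns) * 0.95)])
[/code]

The historical ordering is just one draw; the 95th-percentile drawdown across reshuffles is a better guide to what the strategy can do to you.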

Best,
Tom

Jim, I tested auto-correlation of my strategy, not of the VIX itself. I simply used the Excel function Correl() to compare period x to period x-1 of my realized trades from my volatility model backtest. The VIX itself certainly has some auto-correlation tendencies, which my model takes advantage of.
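For anyone without Excel handy, the same lag-1 check looks something like this in Python (statistics.correlation needs Python 3.10+; the returns below are example data):

[code]
import statistics

def lag1_autocorrelation(returns):
    # Pearson correlation of each period's return with the previous
    # period's return -- the same thing as Excel's Correl(x, x-1).
    return statistics.correlation(returns[1:], returns[:-1])

trade_returns = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04, 0.00, 0.02]
r = lag1_autocorrelation(trade_returns)
print("lag-1 r:", r, "  R-squared:", r * r)
[/code]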

Tom, thanks. Most of the work in the portfolio optimization space was done on asset classes where the word “risk” is used interchangeably with volatility. Anyone who owned mortgage-backed securities in 2008 found out that risk and volatility are two different creatures. You can often measure volatility historically, but the risk of being wiped out is something that often can never be seen in backtests. I assume that most if not all of the portfolio optimization strategies that you mentioned aim to minimize volatility or to maximize the Sharpe ratio or equivalent. That’s fine as far as it goes.

But buying the XIV ETN has the additional ‘risk’ of going to zero that almost no other asset class has, because if the VIX spikes by 80% or more within one day, then XIV will terminate. The VIX spiked by far more on Black Monday, and I have to assume that it will happen again at some point in the future. Therefore it seems prudent to add additional measures of safety, such as position sizing to limit the maximum loss, and only buying XIV when the ‘insurance premium’ is good enough to compensate for the risk (as Tom C pointed out). My question is whether the maximum position size for the wipeout risk can be mathematically estimated, or whether I should use a rule of thumb such as 5%. Rules of thumb are generally convenient and often approximately right, but sometimes they can be totally off.
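One crude attempt at the math, for what it’s worth: add a small probability of a -100% outcome to the trade distribution and numerically maximize expected log wealth. All the numbers below are placeholders, not estimates of the real wipeout probability:

[code]
import math

def growth_optimal_fraction(outcomes, grid=1000):
    # Grid-search the fraction f in [0, 1) maximizing expected log wealth
    # over a discrete outcome distribution given as [(prob, return), ...].
    def g(f):
        return sum(p * math.log(1 + f * r) for p, r in outcomes)
    return max((f / grid for f in range(grid)), key=g)

def xiv_outcomes(p_wipe):
    # p_wipe chance of termination at -100%; otherwise a 57/43 split
    # between +14% and -8% (Jim's stats, used here as placeholders).
    return [(p_wipe, -1.00),
            ((1 - p_wipe) * 0.57, 0.14),
            ((1 - p_wipe) * 0.43, -0.08)]

print(growth_optimal_fraction(xiv_outcomes(0.02)))  # ~0.48
print(growth_optimal_fraction(xiv_outcomes(0.05)))  # 0.0 -- don't play at all
[/code]

The answer is brutally sensitive to the assumed wipeout probability, which nobody actually knows. So at best this brackets a rule of thumb rather than replacing it.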

Chaim, thanks.

Thanks, Tom. I was getting tired of Kelly yesterday when you posted. This is better at least; copied and pasted from Investopedia. The first minus sign got deleted in my original post, I think.

Kelly % = W - [(1 – W) / R]

Where:
W = Winning probability
R = Win/loss ratio

I do think there are other correct formulas depending on whether you assume there is a difference in the amount traded and the amount risked. For example, one might assume they can’t lose 100% in US treasuries. An investor will trade more than than their potential loss. So, someone may have a different formula.