My SP500 Model

Also look at adding an inverse ETF like SH. You may not want to invest that way, but it’s a nice exercise to see what a zero-beta model looks like.

My take:

If I were managing money, I would hedge with SH and then lever up 3x. The expectation would be for 50% better AR w/ half the drawdown w/r/t the SP500 - if the model keeps working!
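For intuition, here is a rough back-of-the-envelope sketch of that arithmetic. Every number below (model return, SP500 return, financing cost) is a made-up placeholder, not a result from the actual model; the point is only to show how a zero-beta hedge plus 3x leverage would be expected to behave.

# Hypothetical illustration only -- all return/cost numbers are placeholders.
model_return = 0.25      # assumed annual return of the long model
sp500_return = 0.10      # assumed annual return of the SP500
borrow_cost = 0.05       # assumed annual financing cost per unit of leverage beyond 1x

# Hedging with SH (roughly -1x the SP500) leaves approximately the model's excess return.
hedged_return = model_return - sp500_return

# Levering the hedged (zero-beta) portfolio 3x, paying to finance the extra 2x.
levered_return = 3 * hedged_return - 2 * borrow_cost
print(f"hedged: {hedged_return:.1%}, levered 3x: {levered_return:.1%}")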

There are other issues that I’m ignoring for now.

Yes, I have a variable hedging strategy and crash protection based on CNN Fear and Greed momentum.

Usually I hedge with my short strategy, but now that I am getting 4% refunds on hedging the SP, I am shorting mostly with SP futures; the slippage and transaction costs are also about 100x smaller.


Wow! Options as a way of reducing transaction costs and/or getting leverage? Whatever your answer, very advanced. Nice! I am not capable of that in my SEP-IRA account, and I'm not sure I could do it in a regular account. Are you already selling options, or is this all SP futures, which I guess is a type of option? Anyway, I'm pretty uninformed, but it's pretty cool.

Do you have a historical database of the CNN Fear and Greed index?
I looked for that a while back and could not find one.

Although I did find a way to supposedly recreate it with Python.
I have not tried it yet.
https://quantopian-archive.netlify.app/notebooks/notebooks/quantopian_notebook_192.html

Tony

I scraped it using the Wayback Machine, and I also recreated it, but it is not 100% the same.
I want to provide my crash-protection indicator as a service soon, though; I'm not willing to give it away for free.

Now I am working on an enhanced one, which uses additional data sets and machine learning to recognize hidden stock market downside risk.

Judith,

Getting back to your original question about testing your model’s validity: I fully agree with Yuval. I focused on the number of years of excess returns because it was a new idea (and a good one), but I felt I was probably already getting wonky with my mathematical answer.

The best way to formalize what Yuval suggests might be a Wilcoxon signed-rank test, as you probably already know. Use monthly or weekly data if you are not happy with a smaller n.

Also, it would be a walk in the park for you to try some Bayesian methods, bootstrapping, etc.

IMHO, p < 0.01 (or a Bayes factor > 10) is necessary but not sufficient (using a Wilcoxon signed-rank test, say). A p-value better than that could still be noise or pure garbage, but anything worse is almost guaranteed to be useless going forward out-of-sample.

Remember, we are NOT randomly selecting anything (including features). We are starting with cherry-picked factors and an already over-optimized public ranking system. Some of the stuff is even published, which is not necessarily a good thing when it comes to cherry-picking, or, getting back to my point, when deciding what p-value threshold to use. To state it simply: a published feature (or anything published) should demand a stricter (not looser) significance threshold. Much, much stricter if you think about it. Support and a source: the Bonferroni correction.
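For concreteness, here is a minimal sketch of that test in Python, assuming SciPy is available. The two return series below are synthetic placeholders, and the number of implicitly tested ideas (m_tests) is a made-up example for the Bonferroni adjustment; substitute your model’s and the SP500’s monthly returns and your own count of candidate models.

# Minimal sketch: Wilcoxon signed-rank test on paired monthly returns,
# plus a Bonferroni-adjusted significance threshold.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_months = 120                                          # 10 years of monthly data
benchmark = rng.normal(0.007, 0.045, n_months)          # placeholder SP500 monthly returns
model = benchmark + rng.normal(0.003, 0.02, n_months)   # placeholder model monthly returns

# Test whether the paired monthly excess returns are positive.
stat, p_value = wilcoxon(model - benchmark, alternative="greater")

# Bonferroni: if you effectively tried m candidate models/factors,
# demand p < alpha / m rather than p < alpha.
alpha, m_tests = 0.01, 20
print(f"p-value: {p_value:.4f}, Bonferroni threshold: {alpha / m_tests:.5f}")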

Just my approach to your question, now that I think you more than understand the math. You probably have your own mathematical methods that you have found useful, and I would be interested in hearing about them. FWIW.

Jim

You might find this useful. CNN has an unpublished API that lets you get the data back to 8/2020.

You can see it in its raw form here:
https://production.dataviz.cnn.io/index/fearandgreed/graphdata/2020-07-30

This will parse it into a dataframe.

import requests
import pandas as pd

# CNN's unpublished Fear & Greed endpoint; the trailing date is the start of the history you want.
url = "https://production.dataviz.cnn.io/index/fearandgreed/graphdata/2020-07-30"
# A browser-like User-Agent is needed, or the request gets blocked.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:20.0) Gecko/20100101 Firefox/20.0'}
response = requests.get(url, headers=headers)
mydata = response.json()  # the endpoint returns JSON, so no manual decoding is needed

# Pull the historical series into a DataFrame and tidy up the columns.
historical_fg = pd.DataFrame(mydata["fear_and_greed_historical"]["data"])
historical_fg["x"] = pd.to_datetime(historical_fg["x"], unit='ms')  # timestamps are in milliseconds
historical_fg = historical_fg.rename(columns={"x": "Date", "y": "value"})
print(historical_fg)
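One possible way to turn that series into a momentum-style signal, in the spirit of the "Fear and Greed momentum" idea mentioned above. This is just a hypothetical example, not anyone's actual indicator, and the 20-day window is an arbitrary choice.

# Hypothetical momentum signal built on the DataFrame produced above.
historical_fg = historical_fg.set_index("Date").sort_index()
fg_daily = historical_fg["value"].resample("D").last().ffill()  # fill weekends/holidays forward
fg_momentum = fg_daily - fg_daily.rolling(20).mean()            # distance from the 20-day average
print(fg_momentum.tail())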

Yeah i know about this, i have data from 2013

The “Fear and Greed Index” should not be difficult to implement.
It could even be a feature here on P123.
Marco?