The use of technical analysis

Once you’ve settled on a fundamental strategy, the fundamentals don’t tell you much about when to buy the stock. To better time the buy, I have frequently used ROC(40) in the grading system to acquire stocks that have recently fallen.
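For concreteness, here is a minimal pandas sketch of that kind of ROC(40) "recent decliner" filter. The daily-bar assumption and the -10% threshold are illustrative choices, not P123's definition:

import pandas as pd

def roc(close: pd.Series, n: int = 40) -> pd.Series:
    """Rate of change over the last n bars, in percent."""
    return (close / close.shift(n) - 1.0) * 100.0

def recent_decliners(prices: pd.DataFrame, n: int = 40, threshold: float = -10.0) -> pd.Series:
    """prices: one column of closes per ticker. Flags names with a deeply negative 40-bar ROC."""
    latest_roc = prices.apply(roc, n=n).iloc[-1]
    return latest_roc[latest_roc < threshold].sort_values()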

What are your experiences with technical analysis? Which indicators have produced statistically significant enough results to provide value in an equity strategy?

There does not appear to be much in the academic literature that supports the use of technical analysis:

https://stocksoftresearch.com/technical-analysis-for-stocks/

https://www.cxoadvisory.com/technical-trading/technical-analysis-tested-globally/
https://www.cxoadvisory.com/commodity-futures/performance-of-technical-trading-rules-for-crude-oil-futures/
https://www.cxoadvisory.com/technical-trading/technical-analysis-as-a-mutual-fund-discriminator/
https://www.cxoadvisory.com/individual-investing/individual-investor-performance-by-motive-and-method/
https://www.cxoadvisory.com/technical-trading/technical-analysis-tested-on-long-run-djia-data/
https://www.cxoadvisory.com/technical-trading/updated-comprehensive-long-term-test-of-technical-currency-trading/
https://www.cxoadvisory.com/technical-trading/true-out-of-sample-test-of-best-technical-trading-rules/
https://www.cxoadvisory.com/investing-expertise/evaluating-5017-technical-trading-recommendations/
https://www.cxoadvisory.com/technical-trading/technical-trading-thoroughly-tested/
https://www.cxoadvisory.com/technical-trading/technical-indicator-model-of-stock-returns/
https://www.cxoadvisory.com/technical-trading/moving-average-rules-over-the-long-run/
https://www.cxoadvisory.com/technical-trading/taking-the-noise-out-of-technical-trading/
https://www.cxoadvisory.com/technical-trading/testing-the-indicators-of-barchart-com/
https://www.cxoadvisory.com/individual-investing/technical-analysis-a-drag/
https://www.cxoadvisory.com/technical-trading/technical-trading-rules-and-data-snooping-bias/
https://www.cxoadvisory.com/technical-trading/testing-the-head-and-shoulders-pattern/
https://www.cxoadvisory.com/technical-trading/bollinger-bands-buy-low-and-sell-high/
https://www.cxoadvisory.com/technical-trading/combining-rsi-and-macd-in-search-of-concentrated-abnormal-returns/
https://www.cxoadvisory.com/technical-trading/simple-test-of-macd-crossover-as-an-abnormal-returns-indicator/
https://www.cxoadvisory.com/technical-trading/classic-papers-returns-from-pattern-based-technical-analysis/
https://www.cxoadvisory.com/technical-trading/technical-analysis-of-market-bubbles-and-anti-bubbles/
https://www.cxoadvisory.com/volatility-effects/testing-a-complex-breakout-indicator/
https://www.cxoadvisory.com/technical-trading/trading-after-n-day-highs-and-lows/
https://www.cxoadvisory.com/volatility-effects/does-a-long-term-moving-average-indicator-predict-big-days/
https://www.cxoadvisory.com/technical-trading/simple-test-of-rsi-as-an-abnormal-returns-indicator/
https://www.cxoadvisory.com/technical-trading/unexplained-volume-as-a-critical-indicator/
https://www.cxoadvisory.com/technical-trading/does-technical-trading-work-with-commodity-futures/
https://www.cxoadvisory.com/technical-trading/out-of-sample-tests-of-bullish-regime-2-day-rsi-signals/
https://www.cxoadvisory.com/technical-trading/candlesticks-fiddlesticks/

RE: THE USE OF TECHNICAL ANALYSIS

I love it!

To paraphrase Warren Buffett, “There is nobody I would rather compete against in the market than someone who has already given up.”

First of all, 98% of academic literature about markets doesn’t reflect reality, IMO. That may be related to the fact that 98% of academics have never purchased or sold a share of stock in their lives, probably because they are highly risk-averse individuals. Because of that lack of practical experience, their assumptions and hypotheses are often way off the mark.

For example, consider the so-called “Value/Growth” anomaly, where academics identified “Value” as low-PE or low Price-to-Book stocks. Fair enough (although neither metric has had any value in 25 years), but what did they use to represent “Growth”? Even worse, they used HIGH-PE, or HIGH Price-to-Book, stocks! The comparison they make is actually cheap vs. expensive, and then they call it “Value vs. Growth.” Of course cheap is almost always going to win that long-run comparison. It’s just silly.

Secondly, investors should never use daily prices for TA. During any given day of the week (i.e., a market session), tens of millions of relatively inexperienced individuals are placing trades in response to their emotions and innate biases – conscious or subconscious. As a result, daily/intraday prices generate an enormous amount of meaningless noise. Weekly prices (Friday’s close) are far more accurate. Longer periods (monthly, quarterly, annual) are even more accurate, but you’ll often miss out on many good trades that might occur on any given weekend. What’s optimal? Combining different time frames (e.g., weekly and monthly) or combining multiple technical indicators with uncorrelated approaches (e.g., moving averages combined with oscillators, breadth measures, etc.) produces far more robust signals.
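As a rough illustration of the kind of combination I mean, here is a sketch that resamples daily closes to Friday closes and pairs a weekly trend filter with a weekly oscillator. The particular lookbacks are illustrative, not a recommendation:

import pandas as pd

def weekly_signal(daily: pd.DataFrame) -> pd.Series:
    """daily: DataFrame with a DatetimeIndex and a 'close' column. Returns 1 (risk on) / 0 (risk off)."""
    wk = daily['close'].resample('W-FRI').last()       # Friday closes
    sma_fast = wk.rolling(10).mean()                   # ~10-week trend
    sma_slow = wk.rolling(40).mean()                   # ~40-week trend

    # simple 14-week RSI as the uncorrelated oscillator leg
    delta = wk.diff()
    gain = delta.clip(lower=0).rolling(14).mean()
    loss = (-delta.clip(upper=0)).rolling(14).mean()
    rsi = 100 - 100 / (1 + gain / loss)

    trend_ok = (wk > sma_slow) & (sma_fast > sma_slow)
    not_overbought = rsi < 70
    return (trend_ok & not_overbought).astype(int)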

The reason WEEKLY closing prices are far more accurate than DAILY? Professional investors (institutional investors, mutual funds, portfolio managers) usually close out trades at Friday’s close to avoid headline risk over the weekend. A prominent example of market-moving headline risk was the Lehman Brothers bankruptcy announcement, which came over a weekend in September 2008, prompting a waterfall selloff in stocks in the following weeks. However, every weekend there are dozens or hundreds of less-famous headlines that can damage an individual stock, an industry-based ETF, and more. Hence, weekly closes are far more accurate because that’s when big money managers trade equities/ETFs at the appropriate fair value or rational technical level. Fortunately for P123 users, we now have weekly price series available to us!

Lastly, 98% of the TA studies I’ve seen (and 100% of the CXOAdvisory stuff) are so simplistic they never had a glimmer of hope. CXO has a well-deserved reputation for labeling someone who consistently succeeds over many years as “lucky.” However, you can’t just apply a 50/200-day crossover (or similar) to a system and expect it to work. In today’s world, the average individual investor has far more computing power at their fingertips than was available from the most powerful supercomputers a few decades ago.

All that processing power, petabyte upon petabyte of high-quality data, and a plethora of backtesting software and websites make it easy for tens of millions of investors to constantly test and apply simple signals. As a result, they have traded away all the easy technical edges. You have to get more sophisticated than just using a MA crossover for robust, valuable technical signals! Consider putting real effort into building signals using RSI, CCI, Stochastics, Breadth, and all the other technical tools that are available in P123 rather than using the most basic, beginner-level measures.

The six strategies I’m currently managing (I’m adding more in the next month) select about a dozen each from more than 50 data sets to create multiple, uncorrelated signal composites. I use macroeconomic, fundamental, sentiment, and technical signals to create composites. All are far more sophisticated than anything I have seen on P123 or anywhere else. By putting in the hard work and long hours to combine multiple measures in unusual ways (such as a homodyne oscillator, the MESA algorithm, and various phasors built externally and imported into P123), I avoid signal deterioration and generate a more accurate market mode/trend identification.

One simple example from my TA indicator set of those 50+ signal systems is one I dubbed the “Rainbow Indicator” because of how its most basic form looks on a chart using a different color for each input. This single indicator combines 15 different SMA and EMA measures plus four different breadth measures. It’s not just for market timing (although it works great for that), but it also can be applied to individual ETFs or stocks in a Universe. Here is how it appears when all inputs are combined and applied to the S&P 500:

Converting that raw measure into a digital signal looks like this:

Zooming in to the last 5 years shows that the series anticipated the Covid Crash, something that 95% of conventional, simple market-timing signals couldn’t or didn’t accomplish:
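For anyone who wants to tinker with the general concept, here is a toy sketch of averaging many moving-average "votes" and digitizing the result with hysteresis. To be clear, this is not the Rainbow Indicator itself; the spans are arbitrary and the breadth inputs are omitted:

import pandas as pd

def ma_composite(close: pd.Series) -> pd.Series:
    """Average many SMA/EMA votes (price above average = 1) into one raw 0..1 series."""
    spans = [5, 10, 15, 20, 30, 40, 50, 75, 100, 125, 150, 175, 200, 250, 300]
    votes = []
    for n in spans:
        votes.append((close > close.rolling(n).mean()).astype(int))   # SMA vote
        votes.append((close > close.ewm(span=n).mean()).astype(int))  # EMA vote
    return pd.concat(votes, axis=1).mean(axis=1)

def digital_signal(raw: pd.Series, on: float = 0.6, off: float = 0.4) -> pd.Series:
    """Convert the raw composite to a 0/1 signal with hysteresis to reduce whipsaws."""
    sig = pd.Series(index=raw.index, dtype=float)
    state = 0
    for ts, v in raw.items():
        if v >= on:
            state = 1
        elif v <= off:
            state = 0
        sig[ts] = state
    return sig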

Building these sophisticated composites isn’t easy. It took a commitment of blood, sweat, tears, and thousands of person-hours over the last 16 years (I began focusing on technicals, building upon my foundation of fundamentals and macroeconomics, in 2006) to develop these sophisticated technical indicators and the strategies using them. However, I can assure you it’s genuinely worth the effort in the long run. Best of luck!

TL;DR. Let me just give my conclusion first: if you cannot find a pattern with the code below then a pattern does not exist. The code is easy enough that you can reach your own conclusions about technical data at home.

Also, P123 might want to embrace what Chris is saying, make it their own (with modifications and possible improvements) and market it.

For those who know basic Python and think technical data may have some value:

P123 already excels at automating the search for fundamental factors. They have the best fundamental data that can be found at a reasonable price (Compustat and FactSet), and they have a lot of talented people spending a lot of time processing that data.

True, one could overfit the data but that is on the individual overfitter. Many have found some ways to mitigate the problems of overfitting. Yuval has shared some of his methods.

The search for technical methods can be automated too. And it can be done easily with universally accepted AI models. The wheel would not have to be reinvented. This turns out to be a lot less computationally expensive than what P123 does with fundamentals. You can do it at home.

There would be little additional computational expense for P123 to add this.

Portfolio Visualizer keeps making an appearance over at Seeking Alpha. Using machine learning to select ETFs and then combining those ETFs the way Portfolio Visualizer does would surpass the usefulness of Portfolio Visualizer, and would presumably be marketable.

P123 could be better than Portfolio Visualizer at a small computational expense. And market that.

P123 does great work with fundamentals. They could also be the market leader for technical data and ETFs.

Do not forget P123 has already made FED data available, and this could be used as a factor. This is huge!!! Okay, that does mix some fundamentals with technical data, sorry.

Walk-forward validation, used here, makes it impossible to have any look-ahead bias. There are two nested for loops in this code.

These are all classification models. One might classify an ETF by whether it outperforms or underperforms a benchmark.

Anyway, here is all the code you would ever need to run a classification model at home. It ran successfully just now:

#Interval determination for the training data: q is the number of training months. 24 months, or 2 years, here.
q = 24

#Import numpy and pandas
import numpy as np
import pandas as pd

#Import the ML algorithms to try
from sklearn.svm import SVC
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

#Import the metrics
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import balanced_accuracy_score
from sklearn.metrics import confusion_matrix

#Import cross-validation and declare the inner and outer cross-validation strategies
from sklearn.model_selection import cross_val_score, KFold
from sklearn.model_selection import GridSearchCV
inner_cv = KFold(n_splits=6, shuffle=False)
outer_cv = KFold(n_splits=6, shuffle=False)

#Import the data
df = pd.read_csv('~/Desktop/SLYGandSLYV.csv')

#Create the containers that collect results across all ETFs
everything = pd.DataFrame()
ag = []

#One column of 0/1 labels per ETF in the spreadsheet
z = ('ETF1', 'ETF2', 'ETF3', 'ETF4', 'ETF5', 'ETF6', 'ETF7', 'ETF8',
     'ETF9', 'ETF10', 'ETF11', 'ETF12', 'ETF13', 'ETF14', 'ETF15', 'ETF16')

for i in range(len(z)):
    cat_plus_c = pd.DataFrame()   # collected actual classes for this ETF
    cat_plus_p = pd.DataFrame()   # collected walk-forward predictions for this ETF
    x = z[i]

    #Walk forward one month at a time: train on rows j..j+q-1, predict row j+q
    #(206 is the number of monthly rows in the spreadsheet)
    for j in range(206 - q):
        X = df[['factor1', 'factor2', 'factor3', 'factor4', 'factor5', 'factor6',
                'factor7', 'factor8', 'factor9', 'factor10', 'factor11', 'factor12',
                'factor13', 'factor14', 'factor15', 'factor16']]
        Xt = X[j:j + q]
        y = df[x]
        yt = y[j:j + q]

        #Models one might try by removing the commenting-out
        #parameters = {'kernel': ('linear', 'poly', 'rbf', 'sigmoid')}
        #svm = SVC(random_state=1)
        #clf = GridSearchCV(svm, parameters, cv=inner_cv, n_jobs=-1, scoring='f1')
        #parameters = {'n_neighbors': [1, 3, 5]}
        #knn = KNeighborsClassifier()
        #clf = GridSearchCV(knn, parameters, cv=inner_cv, n_jobs=-1)
        #clf = LogisticRegression()
        #clf = GaussianNB()
        #clf = RandomForestClassifier(random_state=1, n_estimators=100)
        #clf = SVC(kernel='linear', random_state=1)
        #clf = SVC(kernel='poly', random_state=1)
        #clf = GradientBoostingClassifier(random_state=3)

        #Train on this window. Maybe try naive Bayes to start with; the simplest and quickest-running ML algorithm
        clf = BernoulliNB(alpha=.01)
        clf.fit(Xt, yt)

        #More advanced, for nested cross-validation: might use for final selection of a model later
        #Outer cross-validation to compute the testing score
        #test_score = cross_val_score(clf, Xt, yt, cv=outer_cv, n_jobs=3)
        #print(x, f"The mean score using nested cross-validation is: {test_score.mean():.5f} ± {test_score.std():.5f}")
        #print(x, 'Best estimator for GridCV: ', clf.best_estimator_)
        #print(x, 'Best parameters for GridCV: ', clf.best_params_)
        #print(x, 'Best score for GridCV: ', clf.best_score_)

        #Walk it forward and record the results
        Xtest = X[j + q:j + q + 1]
        ytest = y[j + q:j + q + 1]
        pred = clf.predict(Xtest)
        p = pd.DataFrame(pred, columns=['pred'])
        c = pd.DataFrame(ytest.reset_index(drop=True))
        cat_plus_c = pd.concat([cat_plus_c, c])
        cat_plus_p = pd.concat([cat_plus_p, p])

    #Get some metrics for this ETF and send the data to spreadsheets
    d = df[['date']]
    dt = d[q:].reset_index(drop=True)
    cat_plus_c = cat_plus_c.reset_index(drop=True)
    cat_plus_p = cat_plus_p.reset_index(drop=True)
    cat = pd.concat([dt, cat_plus_c, cat_plus_p], axis=1)
    cat.to_csv('~/Desktop/cat.csv')
    accuracy = accuracy_score(cat_plus_c, cat_plus_p)
    balanced_accuracy = balanced_accuracy_score(cat_plus_c, cat_plus_p)
    precision = precision_score(cat_plus_c, cat_plus_p)
    cm = confusion_matrix(cat_plus_c, cat_plus_p)
    f = f1_score(cat_plus_c, cat_plus_p)
    print(x, 'Accuracy =', accuracy)
    print(x, 'Balanced accuracy =', balanced_accuracy)
    print(x, 'Precision =', precision)
    print(x, 'f1_score = ', f)
    print(x, 'Confusion_matrix = ', cm)
    ag.append(accuracy)
    everything = pd.concat([everything, cat], axis=1)

#Average accuracy across all 16 ETFs, and the full set of dates/classes/predictions
pag = pd.Series(ag)
print('class_accuracy_mean =', pag.mean())
everything.to_csv('~/Desktop/everything.csv')

Excellent points (as always), Jim! I look forward to trying your script this week.

Chris,

As with P123 classic for fundamentals, it is the factors one uses that matter most. With your knowledge of what works, I am sure you will find an effective model using this method. Not necessarily an improvement over what you are already doing, obviously.

Note: the commented-out code can be used for nested cross-validation (I am not sure it is complete here or that it would run without modification). I have a habit of leaving code in, potentially to add back later, but it can become incomplete or un-runnable when variables are changed, commented-out code is not pasted correctly, etc.

But it is easy to use nested cross-validation to determine the best model (e.g., logistic regression vs. a gradient boosting classifier vs. a support vector machine) for each ETF and for each period.
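A rough sketch of what that could look like, on the Xt/yt training window from the script above (the candidate models and parameter grids are just illustrative):

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

inner_cv = KFold(n_splits=6, shuffle=False)
outer_cv = KFold(n_splits=6, shuffle=False)

candidates = {
    'logistic': (LogisticRegression(max_iter=1000), {'C': [0.1, 1.0, 10.0]}),
    'gbm': (GradientBoostingClassifier(random_state=3), {'n_estimators': [50, 100]}),
    'svc': (SVC(random_state=1), {'kernel': ['linear', 'rbf']}),
}

# Xt, yt are the training window for one ETF, as in the walk-forward loop above
for name, (est, grid) in candidates.items():
    clf = GridSearchCV(est, grid, cv=inner_cv, n_jobs=-1)        # inner loop tunes parameters
    scores = cross_val_score(clf, Xt, yt, cv=outer_cv, n_jobs=-1)  # outer loop scores the tuned model
    print(name, f'nested CV accuracy: {scores.mean():.3f} ± {scores.std():.3f}')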

One can also select features in scikit-learn (using RFECV and Pipeline, for example), but it is also true that some ML programs (e.g., GradientBoostingClassifier) do that automatically.
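For example, a minimal RFECV-in-a-Pipeline sketch, again on the Xt/yt window from the script (the logistic-regression estimator is just an illustration):

from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Recursive feature elimination with cross-validation, wrapped in a Pipeline
selector = Pipeline([
    ('scale', StandardScaler()),
    ('rfe', RFECV(LogisticRegression(max_iter=1000), step=1,
                  cv=KFold(n_splits=5, shuffle=False), scoring='accuracy')),
])
selector.fit(Xt, yt)
print('kept factors:', list(Xt.columns[selector.named_steps['rfe'].support_]))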

Put more simply this code can be improved upon.

You are a better coder than I am if you can make sense of the code without some clarification, so please email me with any questions. I think the indents might be preserved in an email, too. If other members are interested, we can do it in the forum.

Some clarification of what I have done with the spreadsheet used here: I took 16 ETFs (an even number) and calculated the returns for each month (open to open). I found the median return for those 16 ETFs for each month. Then each ETF was classified as outperforming or underperforming the median.

I will leave the factors to use up to you. But I used the previous day’s close for my factors, making this 100% PIT.
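A minimal sketch of that label construction (assuming 'opens' is a DataFrame of monthly opening prices, one column per ETF):

import pandas as pd

# 'opens' holds monthly opening prices, one column per ETF
monthly_ret = opens.pct_change()                 # open-to-open return for each month

# median return across the 16 ETFs each month
median_ret = monthly_ret.median(axis=1)

# 1 = outperformed the median that month, 0 = underperformed
labels = monthly_ret.gt(median_ret, axis=0).astype(int)

# Factors computed from the previous day's close would then be merged alongside
# these ETF1..ETF16 label columns to produce the spreadsheet read by the code above.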

The way I did the classification makes it valid to average the accuracy scores!!!

And I would have to review this to be sure, but because the numbers in each classification are equal (an even number of ETFs and using the median to classify ensures this), the aggregated accuracy, precision, recall, and balanced_accuracy are also equal. I could be wrong on some of my metrics.

The program prints out the accuracy, precision, recall, f1, etc. for each ETF. And it is trivial to find other metrics in Scikit-Learn, as you know.

To be sure, averaging the accuracy of all of the ETFs at the end is valid with the way the spreadsheet is constructed. So a large number of factors, ML methods and ETFs can be evaluated at once AND THE RESULTS AGGREGATED.

Also, using classification ML models eliminates any problems with outliers. Some of these models are non-parametric and eliminate any concerns about the normality of the data. Using the median of a group of ETFs is effective for eliminating much of the random market noise.

As always, ETF data is non-stationary and that remains a potential concern.

The program downloads whether to buy or sell each of the 16 ETFs (“everything” file). This can be munged and loaded into Portfolio Visualizer.

With Portfolio Visualizer one can maximize the Sharpe ratio of the ETFs selected to buy, etc. Basically, it is rare that one cannot reduce the volatility and improve the Sharpe ratio of the selected ETFs by using Portfolio Visualizer.

Be aware that I use a Mac, which mainly means you may need different file paths to read and write the .csv files.

Let me know if there is anything I can do to help.

-Jim

Thank you for bringing up that subject!!!

While I am far from automating my technical analysis of trading systems with “clean” factor weightings (value, quality, size, momentum, etc.), I put 90% of my R&D time into the TA approach combined with my trading systems.

  1. The first step of that approach is to determine a risk-on or risk-off regime.

  2. The second step is to determine the systems (those loaded with clean factors / industries) that are being “priced up” by the market, i.e., that show relative strength, and to be invested in exactly those (clean factor) systems in a risk-on market (and only very lightly invested in a risk-off market).

For that I put my 25 cleanest-factor systems (tons of variations of small-cap value momentum, high beta (mid cap, big cap), industry-based systems (oil, materials, healthcare, tech), rank-momentum systems, and deep value (strong lately!!!)) into one live book, and I load those roughly 150 stocks into TC2000, where I analyze them.

I then monitor the top 25 stocks (4-week momentum for the bigger-volume stocks, 3-month momentum for the small caps) technically and fundamentally.

Stuff like PRPH, DBTX, and BOLT (all healthcare; my healthcare systems did very well the last 4 weeks) and DQ (China; my China systems showed relative strength) came up in recent weeks.

Recent trading success came more from avoiding mistakes (I was long illiquid small-cap value momentum from February to June 23, hedged with an IWM short, and sold out to 100% cash on June 23 because illiquid small-cap momentum was losing relative strength) rather than from hitting a big home run.

Up about 11% YTD; that is not much, but at least I avoided being long high beta and crypto, and lately got out of the inflation-profiting illiquid small-cap value momentum systems (since they started to show weakness, especially with the weak bounce in the recent market mini-rally).
Not enough proof to know if this approach is valid, but the results are promising so far.

Also, I have studied a ton of “star” traders who have proven track records, and I match their approaches to scientific papers.
What is very interesting is that those traders “front-run” academics, i.e., they use frameworks years and years before academics catch up (this happened with factor momentum and with short-term momentum (4 weeks) on highly liquid stocks).

So, while I am more critical of academics now than I was, when I see an intersection of traders successfully trading one approach and academics finding the same, I am very interested in that approach.

From what I can see, most successful traders use short-term momentum (4 weeks) on highly liquid stocks that show (beta-adjusted!) relative strength versus the overall market. They also trade “hot” themes (not memes!); right now they are concentrated in healthcare, and they rode the energy theme until recently.
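A rough pandas sketch of what beta-adjusted 4-week relative strength could look like (the 52-week beta window is my own illustrative choice):

import pandas as pd

def beta_adjusted_rs(stock_close: pd.Series, market_close: pd.Series,
                     rs_weeks: int = 4, beta_weeks: int = 52) -> float:
    """4-week stock return minus beta times the 4-week market return (weekly bars)."""
    stock_w = stock_close.resample('W-FRI').last().pct_change().dropna()
    mkt_w = market_close.resample('W-FRI').last().pct_change().dropna()
    stock_w, mkt_w = stock_w.align(mkt_w, join='inner')

    beta = stock_w.tail(beta_weeks).cov(mkt_w.tail(beta_weeks)) / mkt_w.tail(beta_weeks).var()
    stock_ret = (1 + stock_w.tail(rs_weeks)).prod() - 1
    mkt_ret = (1 + mkt_w.tail(rs_weeks)).prod() - 1
    return stock_ret - beta * mkt_ret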

Those traders are extremely flexible (and put in 60-hour++ weeks!!!), agnostic, and do not care about the underlying idea or the why. That does not mean they do not look at fundamentals; they do. But before they look at fundamentals, the stock needs to be strong and show relative strength versus the market.

They are very good at seeing general market risk.
Also, what I have seen over the last two years: most investors simply got lucky with their approach (high beta and crypto investors in 2020/2021, or long inflation from 2021 until recently); they fall in love with their thesis and do not change when the market is not “pricing up” the factor exposures they are in love with.

When sentiment gets hot on Twitter and arguments against their thesis get answered with name-calling, that means something. The disbelief of high-beta and crypto investors was astronomical when we called out at the beginning of December that the trade was over (since even the strongest high-beta systems showed extremely weak relative performance). The same is happening now with long “inflation” (oil, commodities, etc.) investors.

[Of course there is the valid approach of being a longer-term trader/investor who simply chooses factors that do well on average (like illiquid small-cap value momentum) and rides out the drawdowns; not everybody can put in 60-hour+ weeks!]

But I am too inspired by high-performance traders and what they do (especially their flexibility and agnostic approach). Hence the heavy R&D in factor momentum, in technical analysis of capital curves, and then in TA (and fundamental analysis) of the stocks coming out of the strongest systems, while maintaining a good risk-on / risk-off approach to the general market.

I am pretty excited about that stuff!!!


Andreas,

How did you go about studying these “star” traders? Where did you find them?

Thanks,

  • Yuval

Some general comments about the history of technical analysis.

We could start with the Turtle Traders. Not to forget Jesse Livermore, William P. Hamilton, Robert Rhea, Edson Gould etc.

Any objective analysis would say there are more than a few examples in the Market Wizards and New Market Wizards books (too numerous to name here).

BUT BUT BUT, I would not want to put words into anyone else’s mouth, but if someone said some of the technical indicators might be arbitraged away for equities now, then I could not disagree. In fact, I might be inclined to agree.

After all, why do we talk about “fading a technical signal” now?

But here is my question: Chris uses ETFs. Many of these account for a HUGE amount of capital and trading volume. I am not sure even Jim Simons (RT) can always arbitrage away a signal in a day.

Also, P123 has the ability to use fundamentals (FED data) as features. A good move, I think. But either CPI matters for commodities and the yield curve matters for bonds, or they don’t. It is commonly stated that the PMI matters for equities. You can get FED data with or without P123. Decide for yourself with the code above would be my recommendation.
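For example, a minimal sketch of pulling a couple of FRED series into monthly features with pandas-datareader (the series codes are standard FRED tickers; how you merge them with your factor spreadsheet is up to you):

import pandas_datareader.data as web

# 10y-2y Treasury spread and headline CPI from FRED
fred = web.DataReader(['T10Y2Y', 'CPIAUCSL'], 'fred', start='1990-01-01')

features = fred.resample('M').last()
features['cpi_yoy'] = features['CPIAUCSL'].pct_change(12)         # year-over-year inflation
features['curve_inverted'] = (features['T10Y2Y'] < 0).astype(int)  # yield-curve flag

# These columns can then be merged with the factor file used by the code above.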

Find your own signals and prove they are effective (or not effective).

In any case: Thank you Chris. And, FWIW, I think much of this can actually be proven objectively.

Some signals can be proven NOT TO WORK. It goes both ways.

BTW, is P123 still looking to use machine learning to recognize technical patterns? A real question. And can we see if that is one of the technical signals that still works, if P123 is still pursuing that?

Negative data can be the most important data. I don’t think it always shakes out one way or the other.

Just to reiterate, if one of the ML methods in the code above cannot find a pattern, there probably is not a pattern, or any pattern that once existed has been arbitraged away. That can be useful information.

And the effectiveness (or at least the accuracy, recall, precision, f1, etc.) of any useful indicator can be fully quantified.

Yuval, a lot comes from the US Trading Championships:

https://twitter.com/Trader_mcaruso
https://twitter.com/markminervini

Also hints from my Trader Network:

Stockbee
https://twitter.com/Qullamaggie

I know there is a big discussion, for example, about whether https://twitter.com/markminervini is legit, and there is the big question of why those traders sell services or teach for no fee (https://twitter.com/Qullamaggie) when they are vastly profitable. (Being full time myself, I can agree that there are good reasons for being profitable and having a service at the same time, especially connecting with people; it’s a very lonely business otherwise.)

I am not going into that discussion; everybody has to find out for themselves, and of course I can never be 100% sure.

What I saw convinced me, especially when I mapped the frameworks of those traders to newer academic research.


Here is a site and programmer/trader that interests me: https://etfoptimize.com

I think Chris has always had a point in the forum. I take his comments/suggestions seriously.

But leave that aside for a moment. Here is a site that has a pretty good and consistent following. From a business perspective it might be interesting: https://www.portfoliovisualizer.com

The problem with Portfolio Visualizer (and where dramatic improvements can be made) is that it does not have much in the way of indicators (technical or otherwise). What it has is rudimentary.

My only point for P123 is that it would be relatively trivial to dramatically improve on Portfolio Visualizer’s site if there happens to be a business case for it. The computing power required is not large compared to what P123 does now. Using the ML algorithms above would be just one way to make a dramatic improvement and it is clearly not computer-resource intensive.

We don’t really know whether P123 still employs an ML specialist (we never hear from her in the forum if she is still there), or whether P123 continues to focus resources on AI programs for recognizing classic trading patterns (a pretty resource-intensive and complex coding endeavor that may not be possible for P123 in the end).

P123 bought some GPUs (or pays for access) in an attempt to accomplish this, maybe? How many patterns can it recognize? What is its accuracy (precision, recall)? Can it learn and recognize new patterns? How much time have you spent? Are you done with the coding?

Personally, I would rather use the FED data with AI models, and it is already in tabular form on P123’s computers. I think that would be a huge improvement over what Portfolio Visualizer does, and everyone would recognize it. I think some people at Portfolio Visualizer would be signing up at P123.

P123 can debate Chris, Andreas and others about the value of technical indicators. P123 will be right on many counts. Me, I would just ask if there is a market.

For now, if you want to do ML you need to know some Python. The above program is one I have found helpful. I am absolutely sure there are better programs. Working to prove that my program is not that good would not address whether there is a market for good, professional programs.

Many find P123’s API helpful. We do not know what everyone is doing with the API, but it suggests they might be using algorithms that P123 does not provide now. There may or may not be a business case for P123 to provide some of those algorithms itself (without a lot of debate from P123 about what it thinks members should be doing with outside programs).

P123 could consider whether there is a business case for providing programs for people focused on doing other things with P123’s API, or for those not actively running Python programs now.

P123 is free to focus its resources wherever it wants without paying attention to Chris or Andreas or even Yuval, really. Unless Yuval thinks technical trading patterns are the one technical indicator worth pursuing.

Me, I have no problem with P123 providing technical trading patterns. I just wonder if that offers the best bang for the buck, if that is the best use of resources.

Just one idea if P123 actually takes feature suggestions anymore. Or if P123 welcomes any discussion of any ongoing development of AI capabilities.

This was an interesting conversation from which I can always learn something new, but I’m still looking for studies, or even self-studies, that illustrate how to utilize technical analysis to enhance the timing of my own stock purchases.

The first study, which I mentioned above, looked at a variety of known indicators but found that none of them were particularly predictive over 10, 20, or 50 days. I also tested numerous technical criteria and ranking-system rules here at P123, but none of them produced better results.

That is not to imply that I do not consider share-price trends; I partially do so by assigning a bigger weight in the case of a decline in ROC(40). But that is not much more advanced than picking up some of the reversal effect.

I also have a (fundamental) risk-on/off strategy to avoid the worst market moments, but I am now seeking other people’s experiences with applying technical analysis as a final check to better time the buy.

I believe there is something in technical analysis, but I believe it works mostly when combined with other approaches, or in more complex models with AI.

Anyway, thanks for your contributions, and please feel free to add more.

M,

Specific to your question, I think relative strength still works over short periods. And the papers are too numerous to cite here.

While this is more than you asked for, I do tend to want to look at this myself. To avoid overfitting from selection bias in the ETFs, I used just the SPDRs, GLD, and TLT and took the 5 ETFs (out of 11, or about half) with the best relative strength (monthly rebalance), weighting them using mean-variance optimization. Minimum variance here.
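A minimal sketch of that policy (rank on trailing relative strength, keep the top 5, weight by unconstrained minimum variance; the 6-month lookback is illustrative, and a long-only version would need a constrained optimizer):

import numpy as np
import pandas as pd

def top5_min_variance(monthly_close: pd.DataFrame, lookback: int = 6) -> dict:
    """monthly_close: one column per ETF. Returns {ticker: weight} for the next month."""
    rel_strength = monthly_close.iloc[-1] / monthly_close.iloc[-1 - lookback] - 1
    top5 = rel_strength.nlargest(5).index

    rets = monthly_close[top5].pct_change().dropna()
    cov = rets.cov().values
    ones = np.ones(len(top5))
    w = np.linalg.solve(cov, ones)        # solves cov @ w = 1
    w = w / w.sum()                       # unconstrained minimum-variance weights
    return dict(zip(top5, w))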

I signed out of my account to run this. One can check this themselves at Portfolio Visualizer to see how it works for other ETFs and see if there might be some selection bias after all–despite my rather generic choice of ETFs.

The above Python code was just a way to look at additional technical indicators–some a bit more esoteric.

In short, I agree with you. I don’t think there is any doubt that there are still some technical indicators that work. I hope others have additional indicators to share with you. There are almost certainly better indicators than these, but clearly working ones exist.

Chris and Georg have named a few in the forum over the years (and requested that a few be made available in P123’s data). Marc Gerstein seemed to believe in Chaikin Money Flow (CMF), although he may have had some incentive to hold that opinion (he has been employed by Marc Chaikin). I defer to them for a more comprehensive list. I am more about reinforcement-learning “policies,” which can be simple, like the one above: select the 5 ETFs with the best relative strength and weight them using mean-variance optimization.

Link to how CMF is calculated and the theory behind it: https://www.fidelity.com/learning-center/trading-investing/technical-analysis/technical-indicator-guide/cmf
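For reference, the CMF described at that link (commonly computed over 20 or 21 periods) boils down to something like this in pandas, assuming columns high/low/close/volume:

import pandas as pd

def chaikin_money_flow(df: pd.DataFrame, n: int = 21) -> pd.Series:
    """CMF = sum of money-flow volume over n bars / sum of volume over n bars."""
    mf_multiplier = ((df['close'] - df['low']) - (df['high'] - df['close'])) / (df['high'] - df['low'])
    mf_volume = mf_multiplier * df['volume']
    return mf_volume.rolling(n).sum() / df['volume'].rolling(n).sum()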


Thank you much, Jim! It’s always a pleasure to find an ally on the internet, where unsubstantiated ad-hominem takedowns seem to be the name of the game. I’ve always felt that the proof of the pudding is in real-time results. I’m glad you (one of the sharpest minds on Portfolio123) recognize that. Thank you, sir…

2022 Results


Jim and etc: TL;DR - Go lose money… Welcome to the real world where fundamentals don’t matter…

I tried for years. Morons never learn a lesson. GFY

Mr. @Whycliffes, sir!

Do you feel that you received an answer to your questions about technical analysis? When I read your initial post I was quite interested in seeing the responses, as I had spent a good deal of time learning technical analysis.

Perhaps I failed to read the responses carefully enough, but the conclusions I reached from reading them are (1) some (not all) technical analysis techniques can provide guidance for when to buy and sell, and (2) as to which ones, you need to figure it out for yourself. In other words, profitable TA methods exist, but they won’t be simple rules, and the folks who figured out the more complicated rules that do work aren’t going to share them with you. Is that the conclusion you reached?

Cary

My conclusion:
I haven’t found a study or meta-study that convinces me that technical analysis works.
But I agree with ETFOptimize that too many studies test very simple technical indicators, and tend to test them in isolation. If technical analysis is to work, it must be done with several indicators together, to correct for the weaknesses inherent in each individual indicator.

I have read some studies that suggest technical indicators may work better on asset classes that trend more than stocks do, such as forex and commodities.

That said, it is recognized in financial theory that momentum and short-term reversal work, so several technical indicators will indirectly measure these two effects in some way. But that probably does not, in my view, mean that technical analysis overall works very well.

I’ve sometimes thought that the reason technical analysis is so popular is, first of all, that it’s easier to learn and understand than the vast amount of information that needs to be incorporated into fundamental micro or macro analysis. The second reason is that technical analysis is a bit too easy to manipulate. With small changes in the indicators, they can provide good returns in a backtest, but that in turn makes them so over-optimized that they are unlikely to provide much value for future prediction of the price action of a stock.

Finally, of the simulations and screens that are available here as “public”, very few use technical analysis alone, and those that do often do poorly.

But I hope more people will publish their results with just technical analysis, so that it will be possible to probe this further.

In any case, if technical analysis is to be used, the simplest indicators cannot be used in isolation; they must be used together with others, and it is important that they are not over-optimized.

But there are people in this forum who are way smarter than me and who have played this “game” much longer; I hope they can give some feedback.

I ran a 70-year timing backtest of the S&P 500 index in Excel, using only an SMA and MACD. Basically, the timer keeps up with the index with much lower drawdowns. It’s pretty well known that the SMA works, but it’s not very exciting, and periods of whipsaw occur which are emotionally hard to navigate. Here is a log plot of my test:
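For anyone who wants to experiment with something similar, here is a rough Python analogue of an SMA-plus-MACD timer. The 10-month SMA and 12/26/9 monthly MACD parameters are illustrative assumptions, not exactly what I used in the spreadsheet:

import pandas as pd

def sma_macd_timer(monthly_close: pd.Series) -> pd.Series:
    """1 = in the index, 0 = in cash; a rough SMA-plus-MACD timing rule on monthly closes."""
    sma = monthly_close.rolling(10).mean()          # ~10-month trend filter
    ema12 = monthly_close.ewm(span=12).mean()
    ema26 = monthly_close.ewm(span=26).mean()
    macd = ema12 - ema26
    signal = macd.ewm(span=9).mean()
    invested = ((monthly_close > sma) | (macd > signal)).astype(int)
    return invested.shift(1).fillna(0)              # act at the following month's close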

Mr. @sglinski, sir!

Thank you for your post. I have heard of using the 200 day moving average before, but not the MACD. Can you elaborate? I am currently using a version of the 200 Day moving average together with one of the economic indicators that Philosophical Economics mentioned back in 2016. (I only learned about those studies last year.) Perhaps the MACD would be superior.

But back to the question that Mr. @Whycliffes originally raised, do you know of an indicator or set of indicators that will improve the odds of getting a lower price for buys and a higher price for sells when acting on the live portfolio recommendations?

Cary

Good question. This gets down to trading using technicals, and there is no one way to do it.

I think what everyone wants is the highest profit per trade on winners, the lowest on losers, and a high percentage of winners; that also helps you control slippage. One way to do this is to identify stocks that have short-term momentum (30-45 bars) and then a shorter-term pullback of about 5 days. There are many ways to code both of these. By buying on the pullback you are getting a lower price for the stock (buy the dip and sell the rip).

The next part is more of a trading question about controlling slippage. If the stock jumps 10% at the open, most people will pass on it and wait for it to pull back again; if it does not pull back, I pass. If your average profit per trade is 5% and the stock jumps 10%, your odds of success are lower. The simulation uses the average of the high and the low, so it would buy that stock that jumps 10%; some people would just buy it anyway. I’m sure other members have their own rules and code for this.

The execution of the trade is very difficult. It comes down to either trusting the system and just buying, or using your own rules the day of the trade, since you don’t know what the high or the low for the day will be until the end of the market day. The same problem exists on the sell side: if your stock jumps 10% the day your system signals a sell, that’s a good problem; if it drops 10%, that’s a bad problem. You just hope your system’s KPIs hold in the future and it averages out over many years. This is why I avoid systems that have low profit per trade and high turnover. You really have to think about controlling slippage. I think there are members who can make this work, but I can’t.
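A rough sketch of that kind of momentum-plus-pullback screen (the 45-bar and 5-day windows match the description above; the thresholds are just illustrative):

import pandas as pd

def dip_buy_candidates(closes: pd.DataFrame,
                       mom_bars: int = 45, pullback_days: int = 5,
                       min_mom: float = 0.10, max_pullback: float = -0.03) -> pd.Index:
    """closes: one column of daily closes per ticker. Flags strong stocks that just pulled back."""
    # momentum measured up to the start of the pullback window
    momentum = closes.iloc[-1 - pullback_days] / closes.iloc[-1 - pullback_days - mom_bars] - 1
    # return over the most recent few days
    pullback = closes.iloc[-1] / closes.iloc[-1 - pullback_days] - 1
    mask = (momentum > min_mom) & (pullback < max_pullback)
    return closes.columns[mask]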

Cheers,
MV


Great feedback here. Other TA tips: Buy stocks with a recent surge in volume, regardless of price. Use industry momentum, both short-term and long-term, and apply it anywhere from sector down to subindustry. Measure momentum (both individual stock and industry) using all sorts of methods, from the omega ratio to SMAs to VMAs to simple Close(X)/Close(X) to Close(X)/PriceH to RSI. One tip that I find works well for me is to measure individual stock momentum a month ago rather than now. That makes it much easier to buy on a dip, and it also means you don’t fall prey to mean reversion as often, since momentum is usually long-term and mean reversion is usually short-term.
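For instance, measuring momentum as of a month ago rather than today can be as simple as something like this (the 21-day lag and 126-day window are illustrative):

import pandas as pd

def lagged_momentum(close: pd.Series, lag_days: int = 21, window: int = 126) -> float:
    """Roughly six-month return measured as of one month ago (skips the most recent month)."""
    return close.iloc[-1 - lag_days] / close.iloc[-1 - lag_days - window] - 1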
