Boosting your returns

Steve knows what he is talking about here.

The above demonstration with JASP took about an hour, with most of the time spent on writing and screenshots and only a little on JASP itself.

Normally one would spend some time adjusting (and validating) the hyperparameters in a boosting program.

The only hyperparameter I changed was “Shrinkage,” which I set to 0.01 (based on previous experience with boosting programs). I also changed the Training and Validation Data setting to K-fold with 5 folds, which is not a hyperparameter. That was all the time I spent, and I did it before I saw how the test sample performed.
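For anyone who prefers to see those two settings as code, here is a minimal sketch using scikit-learn’s gradient boosting as a stand-in for JASP’s boosting module (JASP is a point-and-click program, so this is an illustration, not its internals); the synthetic data is just a placeholder for your own features and target:

```python
# Sketch of the two non-default settings: shrinkage 0.01 and 5-fold validation.
# scikit-learn's GradientBoostingRegressor stands in for JASP's boosting module.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Placeholder data; substitute your own features X and target y.
X, y = make_regression(n_samples=500, n_features=10, noise=0.5, random_state=0)

model = GradientBoostingRegressor(
    learning_rate=0.01,  # "Shrinkage" in JASP
    n_estimators=1000,   # a small learning rate usually needs more trees
)

# 5-fold cross-validation: the "K-fold with 5 folds" validation choice.
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean(), scores.std())
```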

I thought my point was already made, and that no one would claim changing these two things from their default settings is too hard for a serious investor.

Anyone wanting to spend more time with JASP should also change “Min. observations in node” and “Interaction depth.” The defaults that I used here are almost certainly not optimal, and the optimal hyperparameters will be different for different data (including your data).
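If you want to tune those two settings systematically rather than by hand, a simple grid search is one way to do it. The sketch below again uses scikit-learn as a stand-in; mapping JASP’s “Interaction depth” to max_depth and “Min. observations in node” to min_samples_leaf is my approximation, and the grid values are only examples, not recommendations:

```python
# Sketch: grid search over interaction depth and minimum observations per node.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

# Placeholder data, as in the previous sketch.
X, y = make_regression(n_samples=500, n_features=10, noise=0.5, random_state=0)

param_grid = {
    "max_depth": [1, 2, 3, 5],            # roughly JASP's "Interaction depth"
    "min_samples_leaf": [5, 10, 20, 50],  # roughly "Min. observations in node"
}

search = GridSearchCV(
    GradientBoostingRegressor(learning_rate=0.01, n_estimators=1000),
    param_grid,
    cv=5,  # keep the same 5-fold validation
)
search.fit(X, y)
print(search.best_params_)
```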

The real time that I have spent with boosting has been with XGBoost, which is the program professionals use, and it does offer some additional capabilities. But is XGBoost better than a neural net, as Steve says?
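For completeness, here is a minimal XGBoost sketch with roughly the same kind of settings. The parameter names differ (eta is the shrinkage, max_depth the interaction depth, and min_child_weight is only loosely related to minimum observations in a node), and the specific values are illustrative only:

```python
# Sketch: a similar setup expressed directly in XGBoost.
import xgboost as xgb
from sklearn.datasets import make_regression

# Placeholder data; substitute your own features X and target y.
X, y = make_regression(n_samples=500, n_features=10, noise=0.5, random_state=0)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "eta": 0.01,                      # shrinkage / learning rate
    "max_depth": 3,                   # interaction depth
    "min_child_weight": 10,           # loosely similar to min. observations in node
    "objective": "reg:squarederror",
}

# xgb.cv runs k-fold cross-validation internally; 5 folds to match the JASP setup.
cv_results = xgb.cv(params, dtrain, num_boost_round=1000, nfold=5,
                    early_stopping_rounds=50, seed=0)
print(cv_results.tail(1))
```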

Steve’s opinion of neural nets is shared by many. Here is a quote from the web; I do not think it is from anyone famous, but the same quote can be found everywhere:

“Xgboost is good for tabular data … whereas neural nets based deep learning is good for images….”

“Tabular data” is just data organized in rows and columns, the kind of thing you would find in an Excel spreadsheet.

I actually disagree with this blanket statement. A neural net built with TensorFlow can SOMETIMES beat XGBoost, I think.

But XGBoost is the place to start. And Steve is using TensorFlow too.

If you just want to make money, you should see if Steve has something you can use.