Wednesday, December 5, 2018

Predicting Wins with OBP

The first follow-up to the post on creating a win predictor uses on-base percentage (OBP). Using the same training and test games, four data points were added to the neural network (NN) inputs: team OBP for the road pitchers, road hitters, home pitchers, and home hitters. For any of these with fewer than 600 PA, the OBP was regressed toward a long-term average of .328.
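The post doesn't spell out how that regression to the mean is weighted, but a minimal sketch of one common approach, using the 600 PA threshold and .328 league average mentioned above, might look like this (the function and variable names are my own illustration):

def regress_obp(obp, pa, league_obp=0.328, threshold_pa=600):
    """Pull an observed OBP toward the long-term league average
    when the sample is smaller than the plate-appearance threshold."""
    if pa >= threshold_pa:
        return obp
    # Weight the observed OBP by its share of the threshold and
    # fill the rest with the long-term average.
    weight = pa / threshold_pa
    return weight * obp + (1 - weight) * league_obp

# Example: a .350 OBP over 300 PA is pulled halfway back to .328.
print(round(regress_obp(0.350, 300), 3))  # 0.339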

The test is to predict the probability of the road team winning.* The results were more in line with the Log5 method. Log5 predicted 3084 wins, while the NN with OBP (NNOBP) predicted 3054 wins. There were actually 2851 wins in the test data set. The predictions from the NNOBP were more spread out than those from the NN based simply on winning percentages, with a minimum prediction of 0.41 and a maximum of 0.61. The average mean squared error is 0.497, higher than that of our original simple model.

*Since this is binary, it's 1 minus the probability of the home team winning.
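The post doesn't show the evaluation code, but a rough sketch of the two comparisons it describes (summing predicted road-win probabilities to get expected wins, and computing the mean squared error against the 0/1 outcomes) might look like the following; the log5 and summarize functions here are my own illustration, and the log5 formula shown is the standard one based on the two teams' winning percentages, ignoring home-field advantage:

import numpy as np

def log5(p_road, p_home):
    """Log5 estimate of the road team's chance of beating the home team,
    given each team's winning percentage."""
    num = p_road * (1 - p_home)
    return num / (num + p_home * (1 - p_road))

def summarize(preds, outcomes):
    """Expected road wins (sum of predicted probabilities) and mean
    squared error against the actual 0/1 road-win outcomes."""
    preds = np.asarray(preds, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return preds.sum(), np.mean((preds - outcomes) ** 2)

# Toy illustration with three games (the post's test set has thousands):
print(round(log5(0.550, 0.480), 3))              # road team favored, ~0.570
print(summarize([0.57, 0.45, 0.52], [1, 0, 0]))  # (1.54, ~0.219)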

I would not call this model better, but OBP does seem to have some ability to separate good teams from poor teams. The next step will be to add the opposition OBP of the starting pitchers.



from baseballmusings.com https://ift.tt/2G2dWI5
