| Rank | Team | Rating | Win-Loss |
| --- | --- | --- | --- |
| 32 | STL | 0.014 | 0-3 |
| 31 | KAN | 0.014 | 0-3 |
| 30 | IND | 0.036 | 0-3 |
| 29 | MIA | 0.072 | 0-3 |
| 28 | SEA | 0.128 | 1-2 |
| 27 | ATL | 0.143 | 1-2 |
| 26 | CHI | 0.273 | 1-2 |
| 25 | JAC | 0.363 | 1-2 |
| 24 | CIN | 0.373 | 1-2 |
| 23 | PHI | 0.400 | 1-2 |
| 22 | DEN | 0.411 | 1-2 |
| 21 | CAR | 0.424 | 1-2 |
| 20 | ARI | 0.430 | 1-2 |
| 19 | TAM | 0.467 | 2-1 |
| 18 | CLE | 0.479 | 2-1 |
| 17 | SDG | 0.491 | 2-1 |
| 16 | MIN | 0.497 | 0-3 |
| 15 | NOR | 0.509 | 2-1 |
| 14 | TEN | 0.582 | 2-1 |
| 13 | NYJ | 0.601 | 2-1 |
| 12 | DAL | 0.625 | 2-1 |
| 11 | PIT | 0.647 | 2-1 |
| 10 | WAS | 0.691 | 2-1 |
| 9 | NYG | 0.780 | 2-1 |
| 8 | BUF | 0.794 | 3-0 |
| 7 | SFO | 0.794 | 2-1 |
| 6 | BAL | 0.798 | 2-1 |
| 5 | OAK | 0.898 | 2-1 |
| 4 | DET | 0.908 | 3-0 |
| 3 | HOU | 0.922 | 2-1 |
| 2 | NWE | 0.929 | 2-1 |
| 1 | GNB | 0.958 | 3-0 |

I finally convinced myself to use win/loss data in the ratings, mostly because Minnesota ended up being one of my highest-ranked teams this week (they had another game where they led for most of it). I added another step between the raw numbers (integral difference in log-score) and the final rating. I used an arctan function, which fits the shape I wanted (sigmoidal, with horizontal asymptotes), to transform raw scores into a new number that is sensitive in the middle (most common) range and flattens out toward the asymptotes at the extremes. Wins get a separate arctan function from losses. Here they are:

Wins get the blue line treatment: the best performances approach 100 and the worst approach 40. For losses, the worst performances approach 0 and the best approach 60. Since the two opponents in a given game have raw scores equal in magnitude (one being negative), the formulas work out so that the two teams' arctan'd ratings always sum to 100. For that reason, I like to think of them as "win shares". Anyway, I like the way it works, and I like the way the rankings look. From there, as usual, each game score was weighted 90-100% by strength of opponent (rolling back the self-inclusive strength-of-opponent score, as I explained before). Finally, that score is normalized between 0 and 1, assuming a normal distribution.
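For anyone curious what such a pair of transforms looks like in code, here is a rough sketch. The scale constant `S` is made up for illustration; only the centers and asymptotes come from the description above, not the actual coefficients:

```python
import math

S = 10.0  # made-up scale constant: controls how quickly the curves flatten out

def win_transform(raw):
    """Winner's raw score -> value in (40, 100), centered at 70."""
    return 70 + (60 / math.pi) * math.atan(raw / S)

def loss_transform(raw):
    """Loser's raw score -> value in (0, 60), centered at 30."""
    return 30 + (60 / math.pi) * math.atan(raw / S)

# The two opponents' raw scores are equal in magnitude (one negative), and
# arctan is an odd function, so the two transformed ratings sum to 100:
x = 7.5
print(round(win_transform(x) + loss_transform(-x), 6))  # 100.0
```

Any pair of curves of the form c ± (60/π)·arctan(x/S), centered at 70 and 30, has this "win shares" property, whatever the scale constant is.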

Secondly, like I have done before, I ran the system on last year's games (I only kept data for Weeks 1-14, though). I broke the rating down into a few variables, added some others (like East/West travel and North/South travel for the road team), and ran a regression with score differential as the dependent variable. The ultimate goal is to find a formula for picking against the spread, which is why I am focusing on point differential rather than the binomial win/loss.

With this new twist, I got the best results yet. The standard error is as high as in previous tries (~14.6, which is pretty huge once you realize we are talking about point differential), but the variables and the model seem more statistically significant than before.

I eventually took out the travel numbers and just used the rating numbers (the travel variables showed not even a hint of statistical significance, and I think one of them pointed in the wrong direction). I split the remaining rating into 4 variables: 1) the away team's previous performance on the road, 2) the home team's previous performance at home, 3) the away team's previous performance at home, and 4) the home team's previous performance on the road. As has been the case for a while now with these regressions, variable number 4, the home team's previous performance on the road, is the best predictor (highest magnitude and highest significance) of point differential. Second best is variable 1. In general, something in this data is telling me that the best teams are the ones that can play well on the road. I guess the assumption is that most teams can play well at home, but far fewer play as well on the road.
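As a sketch of what that regression looks like outside of Excel, here is an ordinary least squares fit of the margin on the four split rating variables. The data and the "true" coefficients below are fabricated for the demo (I obviously can't paste the real spreadsheet), with the largest weight deliberately put on the last column to mirror what I found:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design matrix: one row per game, columns =
# [away_road, home_home, away_home, home_road] ratings, plus an intercept.
n_games = 200
ratings = rng.uniform(0, 1, size=(n_games, 4))
X = np.column_stack([np.ones(n_games), ratings])

# Made-up coefficients just for the demo -- note the large weight on the
# last column (home team's road performance), echoing the real result.
true_beta = np.array([-3.0, -8.0, 6.0, 4.0, 12.0])
margin = X @ true_beta + rng.normal(0, 14.6, size=n_games)  # ~14.6 std error

beta, *_ = np.linalg.lstsq(X, margin, rcond=None)
print(np.round(beta, 2))  # estimates of [intercept, v1, v2, v3, v4]
```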

[Linear regression output. Margin is always the home final score minus the away final score. The independent variables are the split home/away versions of my final rating.]

The next step was to see how the formula's predictions look on last year's data (I know it's kind of circular). I had Excel calculate the "expected margin", or equivalently the "predicted spread against the away team". I had long ago collected the closing spread on each of these games too, so I had Excel compare the two: my predicted spread and Vegas' closing spread.

The next step was to craft a way to have Excel make a pick: either the road team covers or it doesn't. If you assume a normal distribution around the mean "predicted spread", with a standard deviation taken from the regression, you can easily get Excel to give you the probability of a cover.
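Here is roughly that probability-of-a-cover calculation outside of Excel. The sign convention is mine for the sketch: both the margin and the line are home-minus-away, so the road team covers whenever the actual margin comes in below the line.

```python
import math

STD_ERROR = 14.6  # residual standard error from the regression

def normal_cdf(x, mu, sigma):
    """P(N(mu, sigma^2) <= x), via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def road_cover_probability(predicted_margin, vegas_line, sigma=STD_ERROR):
    """Probability the road team covers, assuming the true margin is
    normally distributed around the model's predicted margin.
    Both arguments are home-minus-away numbers."""
    return normal_cdf(vegas_line, predicted_margin, sigma)

# e.g. model says home by 3 but Vegas says home by 7, so the road team
# is a better-than-even bet to cover:
print(round(road_cover_probability(3.0, 7.0), 3))
```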

From there I calculated the number of correct and incorrect picks. There were 3 pushes on the year, so I discounted those. I also did not run it for Weeks 1-4, since I assumed there wasn't enough information in those weeks, nor for Weeks 15-17, because I didn't have that data.

With that said, the system went a fairly unimpressive 57.3% against the spread. Still not a complete failure, though. Looking through the data to see what sort of patterns emerged, I discovered a couple of things. First, Weeks 11 and 12 were nightmares: the system batted .375 in those weeks, while every other week was above .500. Did something strange happen in those weeks? Second, as the confidence of the system's picks went up (the probability of a cover, measured by its distance from 50%), its correctness did NOT increase. My data set might be too small, but I suspect the relationships here may not actually be linear, which is what linear regression assumes.
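One quick way to check that confidence-vs-correctness relationship is to bucket the picks by their distance from 50% and compute each bucket's hit rate. The `picks` list below is fabricated for illustration; the real one would come out of the spreadsheet:

```python
def calibration(picks, bucket_width=0.05):
    """picks: list of (cover_probability, pick_was_correct) pairs.
    Groups picks by |probability - 0.5| and returns each bucket's hit rate.
    If confidence means anything, hit rates should rise across buckets."""
    buckets = {}
    for prob, correct in picks:
        b = int(abs(prob - 0.5) / bucket_width)
        wins, total = buckets.get(b, (0, 0))
        buckets[b] = (wins + int(correct), total + 1)
    return {b * bucket_width: wins / total
            for b, (wins, total) in sorted(buckets.items())}

# Fabricated example: three low-confidence picks, two high-confidence ones.
picks = [(0.52, True), (0.51, False), (0.53, True),
         (0.62, True), (0.64, True)]
print(calibration(picks))  # hit rate keyed by the bucket's lower edge
```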

I tweaked the coefficients a little bit, and the winning percentage seems to max out at 59.4%, which feels a lot better than 57. But of course, when tweaking coefficients to maximize wins, it is the picks closest to 50% that move from losses to wins (turning a 49% probability into a 51% one, for example). This further undermined the idea that the higher-confidence picks should work out better.

One last note: when picking straight-up winners, the system was correct on 65.8% of its picks.

That's all for now. I will probably use the same formula for next week, and will start publishing predictions (probability of victory/probability of a cover) before Week 5.