(The rankings below are listed in order for Weeks 11, 12, 13, and 14.)
Green Bay's rankings were 1, 1, 1, 2. The rating system emphatically loved this team.
Pittsburgh's rankings were 3, 3, 7, 6.
One reason for the difference may be how much the system liked two other NFC teams.
New Orleans ranked 4, 2, 2, 1 (but they were upset by the Seahawks, which the system regularly ranked low: 21, 25, 26, 26).
Philadelphia ranked 2, 5, 5, 5.
New England (6, 6, 4, 4) and Baltimore (5, 4, 3, 3) were the other AFC front runners.
The Jets (11, 10, 10, 13) and Chicago (14, 13, 12, 17) never really did as well in the system as they did in the playoffs.
It's really only because Green Bay won the Super Bowl that I thought this experiment might be worth continuing this year. Otherwise, I might have given up on the idea.
The results for 2011 Week 1 are:
Rank | Team | Rating |
32 | IND | 0.029 |
31 | KAN | 0.046 |
30 | PIT | 0.052 |
29 | ATL | 0.105 |
28 | NOR | 0.133 |
27 | STL | 0.190 |
26 | TEN | 0.191 |
25 | SEA | 0.219 |
24 | NYJ | 0.220 |
23 | MIA | 0.241 |
22 | TAM | 0.275 |
21 | DEN | 0.287 |
20 | SDG | 0.350 |
19 | NYG | 0.403 |
18 | CLE | 0.443 |
17 | ARI | 0.465 |
16 | CAR | 0.535 |
15 | CIN | 0.557 |
14 | WAS | 0.597 |
13 | MIN | 0.650 |
12 | OAK | 0.713 |
11 | DET | 0.725 |
10 | NWE | 0.759 |
9 | DAL | 0.780 |
8 | SFO | 0.781 |
7 | JAC | 0.809 |
6 | PHI | 0.810 |
5 | GNB | 0.867 |
4 | CHI | 0.895 |
3 | BAL | 0.948 |
2 | BUF | 0.954 |
1 | HOU | 0.971 |
This is a reflection of nothing but each team's performance in Week 1. Again, it is blind to any human interpretation or bias.
I'll touch on the methodology a little bit:
The most fundamental premise of the rating system is an idea that occurred to me one day: look at the integral difference. Those who have taken calculus may be familiar with the term, but the idea is not as complex as it sounds. One way the "integral" is described in calculus is as the area under a curve.
[Figure: area under the curve example, y = x^2 between 0 and 2]
Each team would have its own "curve" on the graph, and I wanted to evaluate the performance of one team relative to the other as the difference in their areas. That is where the "difference" half of "integral difference" comes from. The team with the larger area performed better (regardless of the final score), and the size of that difference (how much bigger its area was) is the measure of its overall performance in the game.
[Figure: Week 1 Thursday night game diagram]
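To make that concrete, here is a minimal sketch in Python (purely for illustration; the real work lives in an Excel sheet) of the area comparison, using the trapezoid rule on each team's running score. The time points and scores are made up, and the scaling of this raw difference into the 0-1 ratings above isn't shown.

```python
# Minimal sketch: "integral difference" between two teams' scoring curves.
# Each curve is a team's running score sampled at the same game times
# (made-up numbers here, at kickoff and the end of each quarter).

def area_under_curve(times, scores):
    """Trapezoidal area under a running-score curve."""
    area = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        area += dt * (scores[i] + scores[i - 1]) / 2.0
    return area

times  = [0, 15, 30, 45, 60]        # minutes elapsed
team_a = [0, 7, 10, 17, 24]         # hypothetical running score, team A
team_b = [0, 3, 10, 13, 20]         # hypothetical running score, team B

diff = area_under_curve(times, team_a) - area_under_curve(times, team_b)
print(diff)  # positive: team A held the bigger lead for more of the game
```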
[Figure: relationship between the actual difference (x) and my "skewed" difference (y); as the lead becomes larger, its rate of importance decreases. y = ln^2(x + 1)]
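A quick numerical look at that skew (just a sketch; whether it's applied to the momentary margin or to the overall difference, the diminishing-returns shape is the same):

```python
import math

def skewed_difference(x):
    """y = ln(x + 1) ** 2: each additional point of lead counts for less
    than the point before it."""
    return math.log(x + 1) ** 2

for raw in (0, 7, 21, 28):
    print(raw, round(skewed_difference(raw), 2))
# Going from a 0- to a 7-point lead raises y by about 4.3,
# while going from a 21- to a 28-point lead adds only about 1.8.
```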
The results you see for Week 1 above are exactly this measure of each team's performance this week. For all subsequent weeks, the scoring will be slightly different due to the added factor of strength of schedule (SOS).
Last year, I struggled a lot with how exactly to do SOS. I finally resigned myself to going only one team back, meaning I looked only at the previous performance of the current opponent. To be more precise, though, those previous performances should also be skewed by the strength of the opponent's previous opponents. I did not do this with last year's system; the circularity becomes difficult to deal with.
After struggling even more than I did last year, I found a system (doable in Excel) that allows a true SOS calculation. I implemented what I will call a moving-backwards SOS chain: look at the previous opponents of the previous opponents of the previous opponents ... of the current opponent, always starting at Week 1 and ending at the current week. Because it considers only past results and never looks ahead to future performance, it avoids circularity; for that reason, obviously, Week 1 has no SOS factor. The Excel layout I had to configure to do this drove me nuts and skated around the brink of circularity, but I am fairly confident it does what I want. I still need to decide on a "skewing weight" for the strength of schedule. Last year, I almost arbitrarily decided on 90-100%, meaning that the raw score would be multiplied by some number between 0.9 and 1, depending on the strength of the opponent scored against.
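A rough code sketch of what I mean by that chain (the game-record layout, the depth cap on the recursion, and the exact mapping into the 0.9-1 band are my placeholder assumptions here, not the spreadsheet's actual formulas):

```python
# Sketch of the moving-backwards SOS chain. Only past weeks are ever looked at,
# which is what keeps the chain from turning circular.

def opponent_strength(team, week, games, depth=3):
    """Average of `team`'s raw scores before `week`, each discounted by the
    chained strength of the opponent it was earned against."""
    past = [g for g in games if g[1] == team and g[0] < week]
    if not past or depth == 0:
        return 0.5  # neutral strength when there is no history to look back on
    total = 0.0
    for wk, _, opp, raw in past:
        opp_str = opponent_strength(opp, wk, games, depth - 1)
        total += raw * (0.90 + 0.10 * opp_str)  # multiply by a factor in 0.90-1.00
    return total / len(past)

def adjusted_score(raw, opponent, week, games):
    """A raw weekly score skewed by the opponent's backwards-chained strength."""
    return raw * (0.90 + 0.10 * opponent_strength(opponent, week, games))

# games: (week, team, opponent, raw_score) with raw_score in 0-1,
# e.g. Week 1 entries taken from the table above.
games = [(1, "GNB", "NOR", 0.867), (1, "NOR", "GNB", 0.133)]
print(adjusted_score(0.75, "NOR", 2, games))  # a made-up Week 2 score against NOR
```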