The final ratings for the first semester of the 2016-17 season are posted.
One unforeseen consequence of posting the previous edition of the ratings, when so many quality teams still lacked the required number of rounds to be listed, is that it artificially inflated the rankings of a lot of teams. As a result, many teams that were previously ranked in the top 50 dropped a number of spots without actually performing any worse; they were simply bumped down as new teams were added to the list. In the future, I'll have to consider whether it might be better to wait until the end of the first semester for the first release.
I wanted to wait until the coaches' poll was out to post the new ratings. I will refrain from commenting in any detail on specific teams, but it is interesting to think about the differences in where some teams are ranked. I doubt that any single factor can explain every instance of divergence between the computer rating and the human poll. However, if I were to make a couple of guesses about what might be at work, I think the following might be relevant:
I hope to get my hands on the raw data from the coaches' ballots to see how much consensus/dissensus there was among the voters. It could be useful to evaluate whether the divergence that we see with the computer rankings is within the range of human disagreement internal to the poll itself.
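One way to make that comparison concrete, assuming the raw ballots become available, would be to measure pairwise disagreement among the voters' rankings and then check whether the computer ranking's distance from the published poll falls inside that spread. Below is a minimal sketch using Kendall tau distance (the number of team pairs two rankings order differently); the team names, ballots, and poll order are entirely hypothetical placeholders, not real data.

```python
from itertools import combinations

def kendall_tau_distance(rank_a, rank_b):
    """Count the team pairs that the two rankings order differently."""
    pos_a = {team: i for i, team in enumerate(rank_a)}
    pos_b = {team: i for i, team in enumerate(rank_b)}
    disagreements = 0
    for t1, t2 in combinations(rank_a, 2):
        # A pair is a disagreement if the rankings order it in opposite directions
        if (pos_a[t1] - pos_a[t2]) * (pos_b[t1] - pos_b[t2]) < 0:
            disagreements += 1
    return disagreements

# Hypothetical top-5 ballots from three voters (placeholder data)
ballots = [
    ["A", "B", "C", "D", "E"],
    ["B", "A", "C", "E", "D"],
    ["A", "C", "B", "D", "E"],
]
computer = ["A", "B", "D", "C", "E"]   # hypothetical computer ranking
poll = ["A", "B", "C", "D", "E"]       # hypothetical published poll order

# Spread of disagreement internal to the poll itself
internal = [kendall_tau_distance(x, y) for x, y in combinations(ballots, 2)]

# The computer ranking's disagreement with the published poll
external = kendall_tau_distance(computer, poll)

print(max(internal), external)  # here the computer is within the voters' spread
```

If `external` falls at or below the disagreement the voters already show with one another, the computer ranking's divergence is no larger than the human disagreement internal to the poll.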
The usual disclaimers:
For a sense of what the ratings number actually means: