The first set of ratings for 2015-16 is now posted.
Full disclosure: in addition to my work with Concordia, I am also helping Michigan this year. To avoid potential conflict-of-interest problems, I have made no changes to the ratings algorithm since summer and will make no changes over the course of the year. This is pretty easy with Glicko-style ratings because once you set them going, all you have to do is enter new results as they arrive.
A quick refresher on how the ratings work: Glicko-style ratings are determined in a self-correcting, relational way based on who a team competes against. If you win, your rating goes up; if you lose, it goes down. How much it moves depends on the rating of your opponent: beat somebody rated much higher than you and yours will go up a lot; beat somebody rated much lower and yours might barely move at all. And vice versa for losing.

At the beginning of the season, every team starts with the same rating (1500). As results come in, the ratings begin to separate teams, moving them up and down as they win and lose debates. Since there is little data early on, the ratings are much rougher at the beginning and gradually become more fine-tuned over the course of the season; they need some time to sort themselves out. More data = better. More data also stabilizes a team's rating: at the beginning of the season the ratings are more volatile and react more quickly to new results than they do at the end.

The numeric value of a team's rating is essentially predictive. The difference between two teams' ratings forms the basis of a prediction about the outcome of a debate between them. For example, a team with a 200-point advantage is considered roughly a 3:1 favorite over their opponent. You can see the predictions for your own debates by using the prediction calculator.
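The 200-points-equals-3:1 figure follows from the logistic expected-score curve shared by the Elo/Glicko family of systems. Here is a minimal sketch of the idea; note that the real Glicko update also tracks a per-team rating deviation (which shrinks as data accumulates, stabilizing ratings late in the season), and the K constant below is illustrative, not the value the actual ratings use:

```python
def win_probability(rating_a, rating_b):
    """Expected chance that team A beats team B on the logistic
    curve used by Elo/Glicko-family systems. A 200-point edge
    gives about 0.76, i.e. roughly 3:1 odds."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating, opponent, score, k=32):
    """One simplified (Elo-style) rating update. score is 1 for a
    win, 0 for a loss. The team gains more for beating a stronger
    opponent than a weaker one -- and vice versa for losing."""
    return rating + k * (score - win_probability(rating, opponent))

# A 200-point favorite:
p = win_probability(1700, 1500)
print(round(p, 2))                      # about 0.76
print(round(p / (1 - p), 1))            # odds of about 3.2:1

# Upsetting a higher-rated team moves you more than beating a lower-rated one:
print(update(1500, 1700, 1) - 1500)     # big gain
print(update(1500, 1300, 1) - 1500)     # small gain
```

This is why early-season ratings swing more: with every team at 1500, each result carries a lot of information, and only repeated results against varied opponents pin a rating down.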
Comments on where the ratings sit now: