The ratings from previous seasons can be a useful heuristic for seeing how the ratings play out over the course of a year. I produce two sets of ratings: one "weighted" and one "unweighted."
The only difference between the two systems is the rating teams start with at the beginning of the season. In the unweighted ratings, all teams start at the same rating (1500) and move up or down from there. One possible weakness of this approach is that it doesn't adequately account for the quality of the opposition in the early stages of the year. An alternative is to use some prior assumption to assign preseason ratings to teams, which would presumably improve the accuracy of the predictions. The disadvantage is that it could give a team an advantage or disadvantage based on its previous year's level of success, which doesn't seem "fair" given the way we tend to think of each season as discrete from the last. For a bit more on this dilemma and how each system tries to minimize the dangers, read the section in the FAQ on the difference between the weighted and unweighted ratings.
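To make the distinction concrete, here's a minimal sketch of the two initialization schemes in Python. The function names and the regression factor are hypothetical, and the actual weighted formula is the one described in the FAQ, not necessarily this one:

```python
BASE_RATING = 1500

def unweighted_start(teams):
    """Unweighted: every team opens the season at the same baseline."""
    return {team: BASE_RATING for team in teams}

def weighted_start(prior_ratings, regression=0.5):
    """Weighted (one hypothetical form): open each team partway between
    its prior-season rating and the baseline, so last year's results act
    as a soft prior rather than a permanent head start."""
    return {team: BASE_RATING + regression * (rating - BASE_RATING)
            for team, rating in prior_ratings.items()}
```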
Below are the final ratings from the 2013-14 season. One thing that's noticeable is that once all the ballots are in, there doesn't end up being much difference between how the two systems rank teams. There are a few exceptions, but for the most part teams land in very similar positions in the order. The real difference is in the ratings spread: the weighted ratings produce a bit more differentiation between teams, and as a result seem to be slightly more accurate in their predictions. This could ultimately use more analysis.
There was no attempt to "fit" the data to the general community consensus about who the "good teams" are. The only fitting was to optimize the values of the calculation's variables to reduce the error between its predictions and the actual outcomes of future rounds. This optimization is important because the accuracy of the predictions is the basis of the entire system.
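As an illustration of what that optimization might look like, here is a rough sketch. It assumes an Elo-style update with a K-factor and a rating scale (the system's actual variables aren't specified here) and tunes them by grid search against the mean squared error of the predictions:

```python
import itertools

def expected_score(r_a, r_b, scale=400):
    """Standard Elo expectation that team A beats team B."""
    return 1 / (1 + 10 ** ((r_b - r_a) / scale))

def prediction_error(k, scale, rounds):
    """Replay a season with candidate parameters and measure the mean
    squared error between each prediction and the actual result."""
    ratings, error = {}, 0.0
    for a, b, a_won in rounds:  # (team_a, team_b, 1 if A won else 0)
        r_a, r_b = ratings.get(a, 1500), ratings.get(b, 1500)
        p = expected_score(r_a, r_b, scale)
        error += (a_won - p) ** 2
        ratings[a] = r_a + k * (a_won - p)
        ratings[b] = r_b + k * (p - a_won)
    return error / len(rounds)

def fit_parameters(rounds, ks=(16, 24, 32, 48), scales=(200, 400, 800)):
    """Pick the (K, scale) pair that minimizes prediction error."""
    return min(itertools.product(ks, scales),
               key=lambda kv: prediction_error(kv[0], kv[1], rounds))
```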
One obvious oddity in the ratings is the placement of Houston Lanning & Bockmon, which is far too high for their performance (2 tournaments with doubles losses). The placement is due to a current limitation in how the system handles mixed partnerships. Lanning continued to accrue points over the course of the year even after he stopped debating with Bockmon. Since a partnership's rating is the mean of each member's individual rating (not just a record of how they have debated together), Houston BL was gradually boosted by the results of Lanning debating with Rajwani. This problem is discussed in the FAQ and bears further examination for a better solution.
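A toy example, with entirely made-up numbers, shows the mechanism:

```python
def partnership_rating(individual_ratings, debater_a, debater_b):
    """A partnership's rating is the mean of its members' individual
    ratings, regardless of whether those points were earned together."""
    return (individual_ratings[debater_a] + individual_ratings[debater_b]) / 2

# Hypothetical values: if Lanning's individual rating keeps rising while
# he debates with Rajwani, the dormant Lanning/Bockmon pairing rises too.
ratings = {"Lanning": 1650, "Bockmon": 1450, "Rajwani": 1600}
print(partnership_rating(ratings, "Lanning", "Bockmon"))  # 1550.0
ratings["Lanning"] += 60  # points earned debating with Rajwani
print(partnership_rating(ratings, "Lanning", "Bockmon"))  # 1580.0
```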
The table only includes the top 100 and is sortable. If you have any questions about how the ratings are calculated, visit the FAQ page.
Final 2013-14 Adjusted Ratings