Once I had all the ballots formatted to run through the ratings calculation, it wasn't much work to pull out speaker point data for the year, so I thought I'd share it. Even with tournaments increasingly turning to z-score-style adjustments, it seems useful to be aware of raw speaker point trends.
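As a quick aside, the z-score adjustments tournaments use generally rescale each judge's points against that judge's own history. Here's a minimal sketch of that idea in pandas; every column name below is a placeholder of mine, not the real dataset's schema (the extra columns get reused in later snippets):

```python
import pandas as pd

# Toy ballot-level data; all column names here are hypothetical.
ballots = pd.DataFrame({
    "judge":         ["A", "A", "A", "B", "B", "B"],
    "points":        [28.5, 29.0, 28.0, 27.5, 28.5, 29.5],
    "reached_elims": [True, False, True, True, False, True],
    "entries":       [80, 80, 80, 40, 40, 40],
})

# A per-judge z-score: center each judge's points on that judge's
# own mean and scale by that judge's standard deviation.
by_judge = ballots.groupby("judge")["points"]
ballots["z"] = (ballots["points"] - by_judge.transform("mean")) / by_judge.transform("std")
```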
I had to take out zeroes because my dataset records points in elim rounds as zero. While I was at it, I limited the range further to the more typical set of points. Judges do occasionally give points below 27, but it's quite rare and probably not indicative of broader trends.
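Continuing the toy data above, that cleaning step might look like this; the zero filter and the 27 floor are the two cuts just described:

```python
# Drop the zeroes that stand in for elim-round points, then keep
# only the typical range of 27 and up.
nonzero = ballots[ballots["points"] > 0]
typical = nonzero[nonzero["points"] >= 27]
```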
As a point of comparison, I thought it might also be useful to see the point distributions of elimination round participants only.
Here's the distribution for elimination round participants at major national tournaments only. I defined a "major" as any tournament with roughly 75 or more teams.
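Building the two comparison subsets could look something like this; `reached_elims` and `entries` are my own hypothetical stand-ins for elim participation and tournament size:

```python
# Speakers who went on to compete in elimination rounds...
elim = typical[typical["reached_elims"]]

# ...and, among those, the ones at "major" tournaments
# (roughly 75 or more teams entered).
major_elim = elim[elim["entries"] >= 75]
```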
I admit I got a little graphics-happy. Here's a plot that directly compares the three:
And finally, a boxplot. Though boxplots are kind of clunky, I think this graphic is actually useful for concretely seeing the differences between these point distributions. To read the boxplot: the line inside each box is the median, the box spans the middle 50% of points, and the whiskers extend out toward the tails of the distribution.
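A minimal matplotlib sketch of that comparison, using the subsets from the earlier snippets:

```python
import matplotlib.pyplot as plt

# One box per distribution: all typical points, elim participants,
# and elim participants at majors.
fig, ax = plt.subplots()
ax.boxplot(
    [typical["points"], elim["points"], major_elim["points"]],
    labels=["All points", "Elim participants", "Elims at majors"],
)
ax.set_ylabel("Speaker points")
plt.show()
```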
After looking at the data, I would emphasize a few observations about the assignment of speaker points:
1. 28.5 is the median for all points. About 9% of speeches are better than a 29, and about 12% are worse than a 28. (A sketch of the computation follows this list.)
2. 28.7 is the median for elim participants. About 10% are better than a 29.2, and about 11% are worse than a 28.3.
3. 28.8 is the median for elim participants at a major national tournament. About 10% are better than a 29.3, and about 9% are worse than a 28.4.
4. If you assign somebody points below around a 28, you're basically saying that they have no place in elimination rounds. In reality, the cutoff is probably higher, especially at a large national tournament.
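Figures like these are just medians and tail shares. Here's how they might be computed, using the "all points" thresholds from item 1 on the toy `typical` series (the real numbers obviously come from the full dataset):

```python
pts = typical["points"]

# Median of the distribution.
print(pts.median())

# Share of speeches better than a 29 and worse than a 28.
print((pts > 29).mean())
print((pts < 28).mean())
```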
The final 2014-15 ratings are now posted. There are some ideas that I plan on coming back to over the next weeks and months, but for now I'm posting the ratings with little commentary beyond a few logistical notes about procedure:
1. I decided to include a broader range of tournaments than I had previously. The cutoff for tournament size had been approximately 20 teams; I expanded it to include tournaments with as few as about 10 teams. It is important to remember that not all rounds from each tournament are included. In a previous post, I explained how eigenvector centralities are used to determine which teams' results will be calculated. In short, if either team in a round has an EV value of less than 0.15, then the result of that round is not counted (see the sketch after this list). So while the results from a small tournament like West Virginia are included, about half of the rounds are not counted because many of the participating teams lack enough reliable data to produce a rating.
2. Because the broader set of tournaments influenced every team's EV values, there is a small discrepancy between the regular season (pre-districts) ratings as the newest calculation produces them and the ratings that I actually posted after Texas. The difference is small, but it could influence rankings where the margin between teams was also small. This only matters if you're looking at the "Previous" or "Change" columns of the table.
3. The D8 tournament was excluded from the data set because I haven't yet had time to figure out what to do with two-judge panels.
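I won't pretend this is the actual EV computation, but the filtering rule in point 1 can be sketched with networkx: teams as nodes, debated rounds as edges, and a round kept only when both teams clear the threshold. The team names and round list here are purely illustrative:

```python
import networkx as nx

# Toy round list: each tuple is one debate between two teams.
rounds = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("D", "E")]

# Teams are nodes; each debated round adds an edge.
G = nx.Graph()
G.add_edges_from(rounds)

# Eigenvector centrality for every team.
ev = nx.eigenvector_centrality(G)

# A round is counted only if neither team's EV value falls
# below the 0.15 cutoff.
counted = [(a, b) for a, b in rounds if min(ev[a], ev[b]) >= 0.15]
```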