College Debate Ratings

2013-14 First Round Bids

11/6/2014

One of my hopes in putting together these ratings is that they could help in the selection of various awards and recognitions, in particular the selection of at-large bids for the NDT.  Assessing the validity of the ratings presents an interesting dilemma, because the only real external source of validation is the intersubjective consensus of the community.  The ballots of the voters in the at-large process may not perfectly match that consensus, but they are likely indicative of the consensus of those who hold a certain amount of institutional and social power.

The table below shows what the "ballot" produced by the weighted and unweighted ratings would have been for the 2013-14 First Round at-large bids.  Again, it's important to note that no effort was made to "fit" the results to match the coaches' ballots.  To the extent that any fitting was done, it was exclusively to optimize how well the ratings predicted actual round results.

It's interesting to note that both rating systems would have produced almost exactly the same set of bids as the actual voters did.  The only point of disagreement is that the ratings didn't like Kansas BC quite as much as the voters did.  This is pretty remarkable, especially since Kansas was the 16th and final bid.

The ratings would have preferred a few teams ahead of Kansas: Harvard HX (who was ineligible), Oklahoma LM, and Minnesota CE.  However, it should be noted that in the raw rating scores Kansas, Oklahoma, and Minnesota were virtually identical, with only a couple of points separating them (OU and UM are even tied in one).  It would be interesting to go back and examine each team's results more closely.  In broad strokes, I can see why KU and OU would be so close to one another; there are few major differences in their performances.  KU made it to finals of UMKC, whereas OU attended the Kentucky RR.  OU didn't break at Harvard, but KU didn't break at Wake.  KU made it a little further at Fullerton, but OU made it a little further at Texas.  The bigger surprise is the presence of Minnesota, who regularly struggled in early elims.  However, they did break at every tournament.  The biggest piece in their favor, though, is probably their performance at the Pittsburgh RR, where they substantially outshone KU and OU.  Without going back to dig into the data, I suspect that it was at Pittsburgh that Minnesota got boosted back into the conversation.

Name                                        Voters  Weighted  Unweighted
Northwestern: Miles & Vellayappan              1       1         1
Harvard: Bolman & Suo                          2       4         4
Georgetown: Arsht & Markoff                    3       3         3
Michigan: Allen & Pappas                       4       2         2
Rutgers-Newark: Randall & Smith                5       6         6
Mary Washington: Pacheco & McElhinny           6       7         7
Wake Forest: LeDuc & Washington                7       5         5
Wake Forest: Quinn & Min                       8       9        14
Oklahoma: Campbell & Lee                       9      10         9
Towson: Ruffin & Johnson                      10      14        13
California, Berkeley: Spurlock & Muppalla     11      13        15
Georgetown: Engler & McCoy                    12       8         8
Harvard: Taylor & Dimitrijevic                13      11        11
Michigan State: Ramesh & Thur                 14      12        10
West Georgia: Muhammad & Ard                  15      15        12
Kansas: Campbell & Birzer                     16      19        19
Oklahoma: Leonardi & Masterson                17      17        18
Texas: Makuch & Fitz                          18      23        22
Minnesota: Crunkilton & Ehrlich               19      18        17
Harvard: Herman & Xu                          20      16        16
Emory: Sigalos & Jones                        21      21        21
Kentucky: Grasse & Roman                      22      22        23
Wayne State: Leap & Messina                   23      20        20
Wayne State: Justice & Slaw                   24      24        24

Out of curiosity, I compared the ratings to each voter's ballot.  The voter whose ballot most resembled the weighted ratings was Will Repko, with a mean difference of only 1.25 spots.  The voter who most resembled the unweighted ratings was Dallas Perkins, with an average difference of 1.75 spots.  To put those numbers into a little bit of perspective, Dallas's average difference from Repko was 2.33 spots.

Also, the weighted ratings were slightly more aligned with the overall preferences of the voters.  The average deviation of the weighted ratings from voter preferences was 2.1 spots, whereas the average deviation of the unweighted ratings was 2.3 spots.  I suspect that a big part of that gap comes from the significant difference in how the two ratings evaluated Wake MQ.  The unweighted version was not very friendly to them (a huge factor being how differently the two ratings evaluated the quality of their opponents at the Kentucky tournaments).
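For the curious, the ballot comparisons above boil down to the mean absolute difference between two rankings of the same teams.  A minimal sketch of that calculation (the team names and ranks here are hypothetical, not the actual ballots):

```python
def mean_rank_difference(ballot_a, ballot_b):
    """Average number of "spots" by which two ballots disagree.

    Each ballot maps team -> rank (1 = first). Both ballots must
    rank the same set of teams."""
    assert ballot_a.keys() == ballot_b.keys()
    diffs = [abs(ballot_a[team] - ballot_b[team]) for team in ballot_a]
    return sum(diffs) / len(diffs)

# Hypothetical four-team example: the two ballots swap ranks 2 and 3.
voters   = {"A": 1, "B": 2, "C": 3, "D": 4}
weighted = {"A": 1, "B": 3, "C": 2, "D": 4}
print(mean_rank_difference(voters, weighted))  # 0.5
```

The same function applied to each voter's ballot against the ratings' "ballot" yields the per-voter figures discussed above.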