I guess I did not see it when it came out, but now I have come across the Leiter ranking (or, rather, his effort to allow others to rank) of the top 40 law schools. As I understand it, 331 respondents ranked 57 law schools that are arguably in the top 40. The eventual ranking was then determined by taking each school and seeing how it did in head-to-head combat with each other school. The most wins gets you number 1, and so on. Yale is the overall winner and Harvard second.
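To make the method concrete, here is a minimal sketch of that kind of pairwise scoring. The ballot format and school names are my own illustrative assumptions, not the actual survey data; a school "wins" a matchup on a ballot whenever it holds the strictly better rank.

```python
from itertools import combinations

def pairwise_wins(ballots, schools):
    """Count, for each school, how many head-to-head matchups it wins.

    Each ballot maps school -> rank (1 = best). A school beats another
    on a ballot when it holds the strictly lower (better) rank; ties
    score nothing for either side.
    """
    wins = {s: 0 for s in schools}
    for ballot in ballots:
        for a, b in combinations(schools, 2):
            if ballot[a] < ballot[b]:
                wins[a] += 1
            elif ballot[b] < ballot[a]:
                wins[b] += 1
    return wins

# Hypothetical four-school example.
schools = ["Yale", "Harvard", "FSU", "BC"]

# A strategic ballot: favorite first, everyone else tied for last.
strategic = {"FSU": 1, "Yale": 57, "Harvard": 57, "BC": 57}

# An honest ballot ranking all four schools.
honest = {"Yale": 1, "Harvard": 2, "FSU": 3, "BC": 4}

wins = pairwise_wins([strategic, honest], schools)
print(wins)  # FSU sweeps every matchup on the strategic ballot
```

The point of the sketch is the one made above: with 57 schools, a single "my school first, everyone else tied for 57th" ballot hands the favorite 56 head-to-head wins while giving every rival zero, which is why 20 such ballots are worth 20 x 56 wins.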
Whether or not I have described the methodology exactly, the most interesting part of the effort is the set of rankings from each respondent. Leiter picks on FSU a bit for its strategic voting, and rightfully so, since FSU is ranked ahead of Yale on 35 ballots. Plus, FSU has about 20 votes for top law school in the land, and most of the voters taking that view ranked all other schools as tied for last of 57. Remember, this gets FSU 20 x 56 wins in head-to-head competition. I can understand the FSU frustration. It is an excellent and overlooked law school, but, sadly, their pants are down.
Plus, there are plenty of others with their pants around their ankles. Yale gets about 35 last-place votes. In fact, Yale's high ranking is mostly the result of last-place votes (yes, worst of 57 schools) combining with many very high votes. It goes the other way too. A number of those voting for Yale as number one also rated every other school last. Think about it! You are a Yale grad and so worried about Yale's ranking that you feel compelled to rank every other school as tied for worst. Harvard has a number of these as well. These voters, far more than FSU's, should look into therapy. There must be a limit to insecurity or a craving for status. Is it a sense of entitlement, or are they just girlie men or women who did not get hugged enough? Well, here is a big internet hug, so do your best not to create the same pathetic behavior in your own kids.
Even this understates the number of those with their butts hanging out. Miami gets a first-place vote, also by someone ranking every other school last. So do Michigan, San Diego, and others.
In fact, although I did not count, it appears that the most common ranking given was 57. How does that come about? It happens when someone votes the school he or she teaches at, or attended, first and all other schools tied for last, or 57th.
As I said, 331 people voted. I deeply appreciate those instances in which an obviously outside-the-top-15 school did not get a first-place vote. I have no idea what percentage of those voting voted strategically, but here is what I did: I selected Boston College. I doubt any reasonable person thinks BC is the best or the worst law school of the 57 selected. Thus, I counted the number of last-place or first-place votes BC got as a rough and very, very conservative estimate of the number of strategic voters. I get 79+ strategic votes out of 331. Twelve of them are from those who ranked FSU first.
Well, gotta go. I've got many hugs to deliver and my work is just beginning.
Sunday, January 16, 2011
Wednesday, January 12, 2011
Class Participation: How and Why?
I have been rethinking my approach to class participation, and invite your suggestions about how to grade that aspect of student performance, if at all.
Last semester, in Property I, I based 10% of the students' grades on class participation. They won points for class participation in a variety of ways, including serving on review teams, filling out short ungraded quizzes, and signing an "on deck" sheet for Socratic questioning. Despite those many inputs, I still ended up with a very tight cluster of scores, making it difficult to generate a curve that satisfied Chapman's somewhat challenging specs. (My other class, a Law & Economics seminar, raised similar problems.)
I've tried in the past scoring class participation on a more subjective basis, marking the seating chart immediately after class to indicate which students had won class participation points for contributing to discussion of the assigned materials. Although no student ever challenged the fairness of that system, it admits the claim all too easily; I prefer more objective measures of performance. Also, I found that scoring students during or after each class, based on some rough measure of "added to class discussion," invited pestering along the lines of, "Did you count my performance today, Professor Bell? I didn't see you mark the sheet, and you confess to being absent-minded." Fie on that.
I could give up entirely on grading class participation. I don't recall my profs at Chicago keeping track of student participation, after all, unless perhaps for casual dissection in the faculty lounge, and they taught very well. Perhaps I should just stick to exams, and run the risk of teaching to students unprepared for class and unrepentant about their ignorance.
I promised my students that, before I decided how to assess class participation in Property II, I would seek informed advice. If you have some to share, I would welcome hearing it, in the comments below or privately. Thank you.