Wednesday, December 9, 2009
"Exploding Offers"
Complaints about offers with deadlines perceived as short often ignore the realities facing non-top-tier schools and candidates.
Early in the hiring season, top-tier candidates begin getting offers from mid- to upper-level schools. Often, such schools have a ranked list of candidates to whom their deans are authorized to make offers. If a school's faculty has authorized the dean to make offers to, say, eight candidates to fill two slots, two offers will generally go out -- three if the dean has some financial flexibility and could live with the unexpected good luck of three acceptances.
Those slots are then out of play for the rest of the candidate field until the initial offerees make up their minds. The other six candidates the faculty has approved must sit around wondering what will happen next and when. They may face pressures from other schools lower on their preference list. They may need to begin planning to move their families. Too bad. They must wait until the initial offerees run the clock out -- as often happens.
Much of the commentary I've read about deadlines focuses on the needs of initial offerees -- typically the most highly credentialed candidates. But such candidates constitute only a small part of the entry-level pool.
Two practices create the problem to which expiring offers are a solution.
First, highly credentialed candidates commonly stockpile offers. Second, top law schools often expect candidates to wait around until mid- to late spring. Both practices inconvenience everyone else. The rest of us need the stockpiled slots released as quickly as possible so everyone else can get on with the process of finding an academic home. Short deadlines unclog the system.
Focusing solely on the "evil" of short deadlines assumes that stockpiling by highly credentialed candidates and mid- to late spring offers from top schools are themselves unproblematic. Even the language commonly used is loaded. Calling offers with short deadlines "exploding offers" is a lot like calling the estate tax the "death tax" -- it presupposes a particular normative outcome.
The simplest solution is already within candidates' control. Candidates who are concerned about short deadlines should ask about the offer policies of the back-up schools at which they are interviewing. They shouldn't interview at back-up schools to whose policies they object, take up offer slots that other candidates really want, and then complain about the deadlines. If enough top-tier candidates were to use as back-ups only schools willing to leave offers open for extended periods, such schools would presumably get more and better candidates. Schools that wanted to finish their entry-level hiring expeditiously wouldn't find their offer slots clogged by candidates who don't really want to teach there anyhow. Ultimately, the complaint about short deadlines is an assertion that all schools should be willing to serve as back-ups -- a premise with which one can reasonably disagree.
I agree that hardball tactics for the purpose of putting a candidate in an awkward position are reprehensible and counterproductive. But the issue of hardball tactics is analytically distinct from that of offer deadlines. The fact that some deans misuse offer deadlines does not mean that such deadlines -- even if short -- are themselves illegitimate. The contrary is in fact true: deadlines make the system work.
Friday, December 4, 2009
Hey Harvard
I read that Harvard has abandoned its program that waived tuition in the third year for students committing to five years of public interest work.
It appears that economic hardship required the change, but the Harvard president is also quoted as saying they did not know how easy it would be to get Harvard students to go into public interest work.
On the other hand, the Harvard Crimson reports:
"This year, 58 third-year students signed up for the initiative, which has a budget of $3 million per year for a five-year period ending in 2012, . . . About 50 to 60 students entered public service after graduation in previous years before the start of the tuition waiver."
If I am reading the numbers correctly, the program had little or no impact on the number of Harvard grads opting for public interest work. So, what amounted to a $40,000 payment -- or an $8,000-a-year bump to the public service salary -- appears to have been unpersuasive. Even by putting a $40,000 thumb on the scale, Harvard evidently could not compete with the big firms and the starting salaries for its grads.
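A quick back-of-the-envelope check of those figures, using only numbers quoted above (the $40,000 value of the waiver is the post's own estimate):

```python
# Back-of-the-envelope check using only figures quoted in the post.
waiver = 40_000    # the post's estimate of the third-year tuition waiver
service_years = 5  # required years of public interest work

print(waiver / service_years)  # 8000.0 -- the effective per-year salary bump

# Budget check: 58 participants against the reported $3 million annual budget.
print(58 * waiver)             # 2320000 -- roughly consistent with $3M/year
```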
I have an idea for every school that receives applications from qualified candidates in excess of the spots available and wants students to "explore" (in the words of Harvard's president) the possibility of public interest work. But be careful what you wish for, and do this only if you are serious. Don't reduce tuition. In fact, you might raise it for those with well-heeled moms and dads, and even for those so desperate to go to your school that for them no debt is too great. Just make five years of public interest work a condition of admission.
Wednesday, December 2, 2009
Big Law in Los Angeles
Top 20 Suppliers of Partners to the
Ten Largest Law Firms in Los Angeles
Over the Most Recent 25 and 10 Year Periods
School              Most Recent 25 Years    Most Recent 10 Years
Loyola-L.A.                   51                       9
UCLA                          51                       7
Harvard                       48                       8
USC                           38                       6
Boalt                         31                       5
Southwestern                  26                       6
Stanford                      19                       1
Hastings                      18                       2
Columbia                      16                       1
Georgetown                    15                       1
NYU                           15                       3
Yale                          12                       1
Chicago                       10                       1
San Diego                     10                       1
BU                             8                       0
Pepperdine                     8                       0
Santa Clara                    8                       0
Boston College                 7                       2
Michigan                       7                       0
Virginia                       7                       1
Methodology
Number of partners in the 10 largest law firms in Los Angeles (Los Angeles County offices only) admitted to the bar in 1984 or thereafter, or 1999 or thereafter, respectively. Year of admission to the bar is used as a proxy for year of graduation. The 10 largest law firms are ranked by number of attorneys in L.A. County offices per the 2009 Los Angeles Business Journal Book of Lists. Search performed in Martindale Hubbell on-line, 11/27-30/2009.
Wednesday, October 21, 2009
2010 Princeton Review Law School Rankings

Over on TaxProf Blog, I have published a series of rankings based on data extracted from the individual profiles of the 172 law schools in the 2010 edition of the Princeton Review's Best 172 Law Schools (with the University of Cincinnati College of Law on the cover). The rankings are based on a survey of 18,000 students at the 172 law schools, along with school statistics provided by administrators.
Saturday, October 10, 2009
Blind Salarying
At my school we blind grade, which does not mean we cannot see the papers but that we do not know whose they are. The idea is that you might be inclined -- consciously or unconsciously -- to grade some agreeable people higher and others lower. And then there is the halo effect that may influence the grade you give someone who was really great in class but did not do so well on the exam.
If you think law school deans are unaffected by personal views and halo effects, you are asking too much, and I have some swamp land for sale in Florida. Thus, shouldn't law schools consider blind salarying? There is a difference, though, between deans and graders. Deans are closer to elected officials than most other professionals I know of. For elected officials the first priority is to do what is necessary to keep the job or, in deanspeak, not have a "failed deanship." In the "what is necessary" department I have seen some doozies, including the world-record one that was in this very sentence until my better judgment, in one of its rare appearances, said "Don't do it."
Blind salarying would mean salaries would be based on an objective assessment of productivity. I don't think that could be achieved by blotting out the names on yearly reports, because deans -- unlike faculty grading papers -- will know who did what. So the blind grading should be done by a third party -- say, a special committee of the AALS that analyzes faculty performance from each school each year and files a report -- almost like a big arbitration, except there are no "sides."
I fear some readers may not know that I realize this is unworkable. In the last AALS listing that included a category for Objective Law Professors there were only 27 entries, and that was in 1955. Can you imagine what would happen today with blind salarying? The quality of the work would depend largely on whether the reviewer agreed (as it probably did in 1955).
Aside from the objectivity matter, how would we define productivity? Here there is a little more hope, because we could at least agree on what it is not. There would be no correlation between salary and:
1. Unquestioning loyalty to the dean whether in the form of formal membership in the administration or cheer leading.
2. Threats to leave when one has tried and cannot scare up an offer.
3. Threats to leave that the dean feels would make him or her look bad. This is very different from a departure that would actually damage the law school.
4. Never having uttered a public word in opposition to the dean.
5. Whining, butt-kissing, and office visits to the dean. In fact, salaries would be inversely related to the amount of time spent in the dean's office or on the phone with the dean.
6. Ingratiating efforts in the form of "advising" the dean on what is really going on with the faculty that she should know about not because she needs to know but because you want her to know you are on her side. Yes, I am talking about the self-appointed confidants.
7. Complaining about how overworked you are. On this I have a story. One semester a few first-year teachers were asked to teach two 4-credit sections of the same course in the same semester -- so, 8 hours in the semester. I did it, and I have to confess it was the easiest teaching load I ever had. Eight hours of my 9-hour yearly teaching load (actually 10 that year) were taken care of with one preparation that I had done for years. The howling in the halls from others was deafening, and you can bet the dean was reminded every week of how they were going beyond what is expected. Of course, maybe they were craftier than I think and they were pulling the old "briar patch" trick.
8. Race, gender or sexual preference.
Maybe I have this all wrong and what we need is not blind salarying but X-ray salarying. Here the dean would be required to assess not simply what the faculty member obviously does but what good and bad things actually go on. Is the faculty member a constant source of stress by virtue of gossip, exaggerations, and unwanted office visits? Very often the ingratiator is also a stress producer, because he is so self-absorbed he is not content to let the teaching and writing speak for itself.
Friday, September 18, 2009
Are We Worse than Thieves? What Rents are Law Professors and Law Schools Seeking?
In his classic 1967 article on rent-seeking (which does not actually use the term, because it had not yet been coined), Gordon Tullock explained that the cost of theft was not that one person's property was taken by another. In fact, that transaction in isolation may increase welfare. The social costs were the reactions of those attempting to avoid theft and those refining their skills. Richard Posner extended the analysis when he wrote about the costs of monopoly. Again, it was not that some became richer at the expense of others but that enormous sums were invested in bringing about the redistribution. In neither case do the rent-seeking, social-cost-producing efforts create new wealth.
Still, in the case of Tullock and Posner the social costs were at least about something. There was a "there" there in the form of a chunk of wealth to bicker over. But now we come to law professors and law schools.
Law professor efforts to self-promote have exploded. Included are repeated visits to the dean asking for one thing or another, resume padding, massive mailings of reprints, posting SSRN download rankings or, even better, emailing 200 friends asking them to download a recently posted article, churning out small symposia articles because deans often want to see lines on resumes as opposed to substance, playing the law review placement game, and just plain old schmoozing, ranging from name dropping to butt kissing. Very little of this seems designed to produce new wealth. In fact, think of the actual welfare-producing activities that could be undertaken with the same levels of energy -- smaller classes, more sections of needed courses, possibly even research into areas that are risky in terms of self-promotion but could pay off big if something new or insightful were discovered or said. But this is the part that puzzles me. Whether for the thief in Tullock's case or the monopolist in Posner's, the prize is clear. What is the prize for law professors? Are these social costs expended to acquire rents that really do not exist or are only imagined? What are the rents law professors seek?
Law schools make the professors look like small potatoes when it comes to social costs. Aside from hiring their own graduates to up the employment level, they all employ squads of people whose jobs are to create social costs (of course, most lawyers do the same thing), produce huge glossy magazines that go straight to the trash, weasel around with who is a first-year student as opposed to a transfer student or a part-time student, select students with an eye to increasing one rating or another, and obsess over which stone is yet unturned in an effort to move up a notch. I don't need to go through the whole list, but the point is that there is no production -- nothing socially beneficial happens. That's fine. The same is true of Tullock's thief and Posner's monopolist. But again, here is the rub: What is the rent the law schools seek? Where is the pie that they are less interested in making bigger than in just assuring they get the biggest slice possible? What is it made of?
At least thieves and monopolists fight over something that exists. And they often internalize the cost of that effort. Law professors and law schools, on the other hand, may be worse. They do not know what the prize actually is; they just know they should want more; and the costs are internalized by others.
Huh?
This is more properly a comment, but since MoneyLaw is close to dormant, I decided to upgrade it to an actual post. I read with interest the most recent posts about tax faculty rankings. I did this even though I have complained about drawing any inferences from the rankings other than that SSRN may be pretty good at counting.
Beyond my usual concerns -- the emails we all get announcing that we have made the top 10 in one of SSRN's zillions of categories, and SSRN's use of our works to sell advertising -- I am also concerned about what those who post the lists believe they are communicating. I do not mean to pick just on the most recent tax listings, because I have seen this with other listings.
I see two problems, but maybe I am misunderstanding. As I understand it, a tax professor with, say, 10,000 downloads may have written a couple of tax articles that were moderately downloaded and then have 8,000 downloads in other areas. In effect, the number of downloads, if it means anything, does not mean how widely downloaded (much less read or relied on) that author was as a tax professor. If you doubt this, take a look at the downloads for the top two tax professors and see how many of the articles are actually tax articles. It would be possible to write one tax article that was downloaded only once and still be ranked at the top -- and, in fact, pull the entire tax department up with you. It seems to me that any school wishing to move up could just ask its most downloaded scholar in any field to allow himself or herself to be listed as a tax professor and added as a coauthor to one article. Am I wrong on this? By the way, this is the charitable interpretation, because I cannot tell whether, to be considered, one has to be a self-professed tax professor and have uploaded a tax article in the past year, or merely pass one of these tests. If it is the latter, any inferences to be drawn are even more sketchy.
The second problem is with the totals for schools. Isn't this somehow influenced by the size of the school and the number of people there who teach tax? Why not take the downloads of actual tax articles and divide by the number of tax faculty, as in the sketch below? And, of course, even this leaves out other types of works.
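To make the suggestion concrete, here is a minimal sketch of that per-capita measure. All school names and numbers are invented for illustration:

```python
# A minimal sketch of the per-capita measure suggested above.
# School names and figures are hypothetical.
tax_downloads = {
    # school: (downloads of actual tax articles, number of tax faculty)
    "School A": (12000, 8),
    "School B": (9000, 3),
}

# Dividing tax-only downloads by tax-faculty headcount removes the advantage
# a school gets simply from listing more people as "tax."
per_capita = {
    school: downloads / faculty
    for school, (downloads, faculty) in tax_downloads.items()
}

for school, score in sorted(per_capita.items(), key=lambda kv: -kv[1]):
    print(f"{school}: {score:.0f} tax downloads per tax-faculty member")
```

Note that School B, with fewer total downloads, comes out ahead on the per-capita measure -- exactly the distinction the raw totals hide.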
My sense is that if these SSRN rankings were subject to some kind of truth in advertising standards they would be found to be misleading because they seem to have so little to do with the actual tax productivity of a tax faculty or even the interest others have in that faculty's output. And, if the thought that goes into these postings were found in a scholarly article I doubt it would be publishable. In fact, the only place I have seen a similar willingness to stray from what would be acceptable care as a scholar is when academics perform as expert witnesses.
Thursday, September 17, 2009
New Tax Faculty Rankings
- Graduate Tax Faculty Rankings (Michigan is #1)
- Tax Professor Rankings (Louis Kaplow & Reuven Avi-Yonah are #1)
- Tax Faculty Rankings (Michigan is #1)
- Tax Faculty Metropolitan Area Rankings (Los Angeles is #1)
Monday, September 7, 2009
66% of the Time, Every Time
When I began teaching economics, something struck me during the first week. I knew a fair amount about economics -- much less than I thought -- but I had received not even a minute's worth of instruction on teaching. All I could think to do was read the book, more or less explain it in my own words using examples not in the book, and answer questions. There were no war stories for a first-year teacher of microeconomic theory. One thing that gradually occurred to me is that a knowledge of economics, and then later of law, only accounted for about 66% of what I did as a teacher. And it also occurred to me that while students see the professor while he or she is teaching, they only witness about 66% of what goes into teaching.
Other courses, common sense, and day-to-day experiences inform teaching, yet their importance remains behind the scenes. One of the most useful courses I took was a required freshman-level course in logic. I am not sure it is required or even offered any more, but it did mean that I do not confuse causation and correlation. It also meant that I do my best to correct students who reason like this: "The professor does not need to take roll because I attend regularly." Bizarre, right? But I have heard the very same "reasoning" from law professors. For example, "There is no need to have a rule requiring professors to take roll because I already take roll." I assume professors finding this acceptable also find it acceptable in class.
The meaning of a normal distribution also came up and can be understood in the context of reasoning I have heard twice lately: "My method of testing is valid because it produced a normal distribution." I most recently heard this from someone administering a law exam to people with widely varying knowledge of English. The normal distribution means nothing about the validity of the test. My guess is that what she was testing was the ability to understand English. The normal distribution fixation is particularly odd. If the students in the class are normally distributed then, hopefully, the test result will reflect that. On the other hand, getting a normal distribution does not mean the same is true of the class itself. In fact, a normal distribution could just as easily cause concern about the test. Normal distributions are, however, convenient when grades must be assigned.
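The point is easy to demonstrate by simulation. In this sketch (all parameters invented), exam scores are driven almost entirely by English proficiency rather than knowledge of law, yet the distribution still comes out bell-shaped:

```python
# A simulation of the point above: a test measuring the "wrong" trait
# still produces a normal-looking distribution. Parameters are invented.
import random

random.seed(1)

english = [random.gauss(70, 10) for _ in range(200)]        # English proficiency
law_knowledge = [random.gauss(70, 10) for _ in range(200)]  # what we meant to test

# Suppose the exam score is 90% English and only 10% law.
scores = [0.9 * e + 0.1 * k for e, k in zip(english, law_knowledge)]

# Crude text histogram: a tidy bell curve, telling us nothing about
# which trait the exam actually measured.
for lo in range(40, 100, 5):
    n = sum(lo <= s < lo + 5 for s in scores)
    print(f"{lo:3d}-{lo + 4:3d} {'#' * n}")
```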
And now back to logic. Remember your high school math classes. Some teachers said to show your work and then gave you credit if you got everything right except, say, the final step. Others just machine graded. The problem is this. In most complex math problems there are many ways to get a wrong answer. Some reveal that the test taker did not have a clue. Some reveal that the test taker forgot to carry the one on the last step. The machine grader gives them the same credit although their knowledge and understanding are quite different. The teacher who requires the student to show his or her work makes a distinction because there is a distinction. Of course, the same is true in law, where the issues are not simply complex but more nuanced.
This also relates to the point that students see only about 66% of what goes into teaching. Suppose you give a machine-graded exam and there are 10 reasons that could explain a wrong answer. If most of the students are getting it wrong for the same reason, it suggests an opportunity to improve one's teaching the next term. (Unless, of course, the goal is not really to teach but to get a good distribution.) I assume the machine-graded test givers just plow along without pinpointing the problem, which may reflect their teaching as much as student diligence.
The all-time prize for irrational testing actually goes to essay test givers who say something like "Answer 3 of the next 5 questions." There are many combinations of 3 out of 5 -- ten, to be exact -- and each one represents a different test. In addition, a student could get an 80 out of 100 on all five and do worse than a student who scores an 85 on three but would have scored a 60 on the other two. Pretty simple, right? This is, however, popular with the students, and you know where that can lead.
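Both claims are easy to verify. A sketch, using the hypothetical scores from the paragraph above:

```python
# Verifying the claims above: "answer 3 of 5" defines C(5,3) = 10 distinct
# exams, and the "weaker" student can come out ahead.
from itertools import combinations

print(len(list(combinations(range(5), 3))))  # 10 different three-question tests

student_a = [80, 80, 80, 80, 80]   # 80 on every question
student_b = [85, 85, 85, 60, 60]   # 85 on three, 60 on the other two

def best_score(scores):
    # Each student naturally picks his or her three strongest questions.
    return max(sum(combo) / 3 for combo in combinations(scores, 3))

print(best_score(student_a))  # 80.0
print(best_score(student_b))  # 85.0 -- the student with two 60s wins
```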
I would not want to confuse causation and correlation, but there is a pattern. All of the reasoning that, at least to me, seems in error does make the lives of those making the errors easier. Could it be that reasoning is driven by convenience and self-interest?
Sunday, August 30, 2009
How Top-Ranked Law Schools Got That Way, Pt. 3
Part one and part two of this series focused on the top law schools in U.S. News and World Report's 2010 rankings, offering graphs and analysis to explain why those schools did so well. This part rounds out the series by way of contrast. Here, we focus on the law schools that ranked 41-51 in the most recent USN&WR rankings, those that ranked 94-100, and the eight schools that filled out the bottom of the rankings.
[Chart omitted: weighted and itemized z-scores of schools ranked 41-51 in the 2010 USN&WR rankings]
The above chart shows the weighted and itemized z-scores of law schools about 1/3rd of the way from the top of the 2010 USN&WR rankings. Note the sharp downward jog at Over$/Stu—a residual effect, perhaps, of the stupendously large Over$/Stu numbers we earlier saw among the very top schools. Note, too, that three schools here—GMU, BYU, and American U.—buck the prevailing trend by earning lower scores under PeerRep than under BarRep (GMU's line hides behind BYU's). As you work down from the top of the rankings, GMU offers the first instance of that sort of inversion; all of the more highly ranked schools have larger itemized z-scores for PeerRep than for BarRep. That raises an interesting question: why did lawyers and judges rank those schools so much more highly than fellow academics did?
[Chart omitted: weighted and itemized z-scores of schools ranked 94-100]
The above chart shows the weighted, itemized z-scores of the law schools ranked 94-100 in the 2010 USN&WR rankings—about the middle of all of the 182 schools in the rankings. As we might have expected, the lines bounce around more wildly on the left, where they trace the impact of the more heavily weighted z-scores, than on the right, where z-scores matter relatively little, pro or con. Beyond that, however, no one pattern characterizes schools in this range.
[Chart omitted: weighted and itemized z-scores of the eight lowest-scoring schools in the model]
The above chart shows the weighted and itemized z-scores of law schools that probably did the worst in the 2010 USN&WR rankings. I say, "probably," because USN&WR does not reveal the scores of schools in the bottom two tiers of its rankings; these eight schools did the worst in my model of the rankings. Given that uncertainty, as well as for reasons explained elsewhere, I decline to name these schools.
Here, as with the schools at the very top of the rankings, we see a relatively uniform set of lines. All of the lines trend upward, of course. These schools did badly in the rankings exactly because they earned strongly negative z-scores in the most heavily weighted categories, displayed to the left. Several of these schools did very badly on the Emp9 measure, and one had a materially poor BarPass score. Another of them did surprisingly well on Over$/Stu, perhaps demonstrating that, while the very top schools boasted very high Over$/Stu scores, no amount of expenditures-per-student can salvage otherwise dismal z-scores.
[Crossposted at Agoraphilia, MoneyLaw.]
Thursday, August 27, 2009
Best Value Law Schools
For a chart of the Top 25 Value Law Schools, and a chart of the Top 65 Value Law Schools with their corresponding U.S. News rank, see here. The National Jurist identified 65 law schools that carry a low price tag and are able to prepare their students incredibly well for today's competitive job market. In determining what makes a law school a "best value," we first looked at tuition, considering only public schools with an in-state tuition less than $25,000 and private schools with an annual tuition that comes in under $30,000. We then narrowed the playing field again by including only schools that had an employment rate of at least 85% and a school bar passage rate that was higher than their state average. We then ranked schools, giving greatest weight to tuition, followed closely by employment statistics.
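As a sketch of the screening rule described above (the school data and the exact weighting are assumptions; The National Jurist does not publish its formula):

```python
# A sketch of the "best value" screen described above. School data and the
# tuition/employment weighting are invented; the magazine's formula is not public.
schools = [
    # (name, is_public, tuition, employment_rate, bar_pass, state_avg_pass)
    ("School A", True, 18000, 0.91, 0.88, 0.80),
    ("School B", False, 28000, 0.87, 0.82, 0.84),  # fails the bar-pass screen
    ("School C", False, 29000, 0.90, 0.86, 0.81),
]

def qualifies(name, is_public, tuition, emp, bar, state_avg):
    cap = 25000 if is_public else 30000
    return tuition < cap and emp >= 0.85 and bar > state_avg

# Rank the survivors: lowest tuition first, employment rate as the tie-breaker.
best_value = sorted(
    (s for s in schools if qualifies(*s)),
    key=lambda s: (s[2], -s[3]),
)
for name, *_ in best_value:
    print(name)
```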
Wednesday, August 26, 2009
"Annual" Multiple Choice Testing Post
It's been nearly two years since my "annual" post opposing multiple choice examinations for law students. The last one generated some good comments and can be found here. I still find the question intriguing. Before going on a bit, some basics. First, I am writing about machine-graded exams, not multiple choice or true/false with explanation questions, which are actually short essays that focus the students on specific topics. Second, I am not really writing about the mixed exam in which some "objective" (what a crazy thing to call them) questions are included with the essays. Third, I sincerely want the multiple choice machine graded (MCMG) supporters to be right. I hate grading more than anything else associated with my job. Finally, I think the whole matter presents a wonderful opportunity to examine self-governance. More specifically, has anyone actually studied the effects of MCMG exams as opposed to essay exams, or is the trend toward MCMG exams strictly a matter of convenience?
Here is what I would like to know: If you use MCMG exams, aren't you teaching a different course than if you use essay exams? I am not saying the teacher is doing anything differently, but aren't the students "hearing" and making note of different things? Which course should be taught?
Do teachers at the fancy schools use MCMG exams? If so, does that mean that today's law schools are hiring people who are good at MCMG exams? If so, is that reflected in their teaching, testing, and ultimately their evaluation of today's students?
What does it mean when someone defends MCMG by saying it produced a "great curve" or a "normal distribution"? Does that mean the students were tested on the right things, whatever they are? I suppose you would get a normal distribution if you used a softball-throwing contest.
What does it mean when someone defends MCMG by saying the same students do well on both types of tests? What is the connection between that and what they are learning and teaching effectiveness?
Has anyone using MCMG exams actually studied how to write "good" multiple choice questions?
As a comment to my last post on this, Nancy Rappaport had some interesting views.
If you use MCMG exams, how do you perform the diagnostic element of teaching and testing? By that I mean the process of identifying individual and group weaknesses in reasoning and expression so you can adjust your teaching the next time around.
Having said all this and revealed my distrust of MCMG exams, I realize that some of the same questions could be asked about essay exams. What is the connection between good essay exam writing and a student's potential as an attorney, judge, or law professor? I think I have a better chance of spotting the ones with great potential when they are forced to reveal themselves in an essay. But that, too, has not been tested. In effect, our testing needs to be tested.
At one level what worries me the most is the thought that if essay exams could be graded even faster than MCMG exams, a fair number of law professors would switch back and then defend the new position as consistent with good teaching and evaluation.
Sunday, August 23, 2009
How Top-Ranked Law Schools Got That Way, Pt. 2
In the first post in this series, I discussed the mysterious distribution of maximum z-scores in the top two tiers of law schools in U.S. News & World Report's 2010 rankings, and focused on the top-12 schools to solve that mystery. In brief, among the very top schools, "employment nine months after graduation" ("Emp9") varies too little to make much of a difference in the schools' overall scores, whereas overhead expenditures/student ("Over$/Stu") varies so greatly as to almost swamp the impact of the other factors that USN&WR uses in its rankings. Here, in part two, I focus on the top 22 law schools in USN&WR's 2010 rankings. In addition to the Emp9 and Over$/Stu effects observed earlier, this wider study uncovers some other interesting patterns.
[Graph omitted: "Weighted & Itemized Z-Scores, 2010 Model, Top-22 Schools"]
The above graph, "Weighted & Itemized Z-Scores, 2010 Model, Top-22 Schools," offers a snapshot comparison of how a wide swath of the top schools performed in the most recent USN&WR rankings. It reveals that the same effects we observed earlier, among just the top-12 schools, reach at least another ten schools down in the rankings. With the exception of Emory and Georgetown, Emp9 scores (indicated by the dark blue band) barely change from one top-22 school to another. Over$/Stu scores, in contrast (indicated by the middle green hue), vary widely; compare Yale's extraordinary performance on that measure with, for instance, Boston University's.
This graph also reveals some other interesting effects. Like the Emp9 measure, the Emp0 measure (for "Employment at Graduation," indicated in yellow-green) varies little from school to school. Indeed, it varies even less than the Emp9 measure does. Why so? Because all of these top schools reported such high employment rates. All but Minnesota reported Emp0 rates above 90%, and all but Georgetown, USC, and Washington U. reported rates above 95%.
These top 22 schools also reported very similar LSATs. Their weighted z-scores for that measure, indicated here in light blue, range from only .20 to .15. The weighted z-scores for GPA, in contrast, marked in dark green, range from .24 to .06.
As the graph indicates, the measures worth 3% or less of a school's overall score—student/faculty ratio, acceptance rate, Bar exam pass rate, financial aid expenditures/student, and library volumes and equivalents—in general make very little difference in the ranking of these schools. One exception to that rule pops up in the BarPass scores (in dark orange) of the California schools, which benefit from a quirk in the way that USN&WR measures Bar Pass rates. Another interesting exception appears in Harvard's Lib score (in white)—only thanks to its vastly larger law library does Harvard edge out Stanford in this ranking.
To best understand how a few law schools made it to the top of USN&WR's rankings, we should contrast their performances with those of the many schools that did not do as well. I'll thus sample the statistics of the law schools that ranked 41-51 in the most recent USN&WR rankings, those that ranked 94-100, and the eight schools that filled out the bottom of the rankings. Please look for that in the next post.
[Crossposted at Agoraphilia, MoneyLaw.]
Thursday, August 20, 2009
How Top-Ranked Law Schools Got That Way, Pt. 1
How do law schools make it to the top of the U.S. News & World Report rankings? USN&WR ranks law schools based on 12 factors, each of which counts for a certain percentage of a school's total score. Peer Reputation counts for 25% of each law school's overall score, for instance, whereas Bar Passage Rate counts for only 2%. More precisely, USN&WR calculates z-scores (dimensionless statistical measures of relative performance) for each of the 12 factors for each school, multiplies those z-scores by various percentages, and sums each school's weighted, itemized z-scores to generate an overall score for the school. USN&WR then rescales the scores to run from 100 to zero and ranks law schools accordingly.
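For readers who want the arithmetic spelled out, here is a minimal sketch of that scoring procedure, using just two of the 12 factors and invented school data (the 25% and 2% weights come from the paragraph above):

```python
# A minimal sketch of the USN&WR scoring arithmetic described above,
# using two of the 12 factors; school data are invented.
from statistics import mean, pstdev

raw = {
    "School A": {"PeerRep": 4.8, "BarPass": 0.95},
    "School B": {"PeerRep": 3.9, "BarPass": 0.90},
    "School C": {"PeerRep": 3.1, "BarPass": 0.93},
}
weights = {"PeerRep": 0.25, "BarPass": 0.02}  # per the percentages above

# Sum each school's weighted z-scores across the factors.
totals = dict.fromkeys(raw, 0.0)
for factor, w in weights.items():
    vals = [raw[s][factor] for s in raw]
    mu, sigma = mean(vals), pstdev(vals)
    for school in raw:
        totals[school] += w * (raw[school][factor] - mu) / sigma

# Rescale so the scores run from 100 down to zero, then rank.
lo, hi = min(totals.values()), max(totals.values())
for school, t in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(school, round(100 * (t - lo) / (hi - lo)))
```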
In earlier posts I described my model of the most recent U.S. News & World Report law school rankings (the "2010 Rankings"), quantified its accuracy, and published itemized z-scores for the top two tiers of schools. (Separately, I also suggested some reforms that might improve the rankings.) Studying those z-scores reveals a great deal about how the top-ranked law schools got that way. The lessons hardly jump out from the table of numbers, though, so allow me to here offer some illustrative graphs.
[Graph omitted: "Weighted & Itemized Z-Scores of Top 100 Law Schools in Model of 2010 USN&WR Rankings"]
The above graph, "Weighted & Itemized Z-Scores of Top 100 Law Schools in Model of 2010 USN&WR Rankings," reveals an interesting phenomenon. The items on the left of the graph count for more of each school's overall score, whereas the items on the right count for less. We would thus expect the line tracing the maximum weighted z-scores for each item to drop from a high, at PeerRep (a measure of a school's reputation, worth 25% of its overall score), to a low, at Lib (a measure of library volumes and equivalents, worth only .75%). Instead, however, the maximum line droops at Emp9 (employment nine months after graduation) and soars at Over$/Stu (overhead expenditures per student). The next graph helps to explain that mystery.
[Graph omitted: "Weighted & Itemized Z-Scores, 2010 Model, Top-12 Schools"]
The above graph, "Weighted & Itemized Z-Scores, 2010 Model, Top-12 Schools," reveals two notable phenomena. First, the Emp9 z-scores, despite potentially counting for 14% of each school's overall score, lie so close together that they do little to distinguish one school from another. In practice, then, the Emp9 factor does not really affect 14% of these law schools' overall scores in the USN&WR rankings. (Much the same holds true of top schools outside of these 12, too.)
Second, the Over$/Stu z-scores range quite widely, with Yale having more than double the score of all but two schools, Harvard and Stanford, which themselves manage less than two-thirds of Yale's Over$/Stu score. That wide spread gives the Over$/Stu score an especially powerful influence on Yale's overall score, making it almost as important as Yale's PeerRep score and much more important than any of the school's remaining 10 z-scores. In effect, Yale's extraordinary expenditures per student buy it a tenured slot at number one. (I observed a similar effect in last year's rankings.)
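A toy illustration of why that spread matters so much (all numbers invented, and the 9.75% expenditures weight is my assumption about USN&WR's formula, not a figure from this post): one extreme outlier on a lightly weighted factor can contribute more to an overall score than leading a tightly clustered, heavily weighted factor.

```python
# Invented numbers illustrating the point above: an extreme outlier on a
# lightly weighted factor can out-contribute a heavily weighted one.
from statistics import mean, pstdev

def weighted_z(value, values, weight):
    return weight * (value - mean(values)) / pstdev(values)

# 19 schools tightly clustered on reputation, plus one modest leader...
peer_rep = [4.9] + [4.0 + i * 0.05 for i in range(19)]
# ...versus one enormous spending outlier among otherwise similar schools.
spending = [300_000] + [60_000 + i * 2_000 for i in range(19)]

print(weighted_z(4.9, peer_rep, 0.25))        # reputation leader, 25% weight
print(weighted_z(300_000, spending, 0.0975))  # spending outlier, 9.75% weight
```

On these numbers the spending outlier's weighted z-score edges out the reputation leader's, despite carrying less than half the weight.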
Other interesting patterns appear in "Weighted & Itemized Z-Scores, 2010 Model, Top-12 Schools." Note, for instance, that Virginia manages to remain in the top-12 despite an unusually low Over$/Stu score. The school's strong performance in other areas makes up the difference. Though it is not easy to discern from the graph, Virginia's reputation and GPA scores fall in the middle of these top-12 schools' scores. Northwestern offers something of a mirror image on that count, as it remains close to the bottom of the top-12 despite a disproportionately strong Over$/Stu score. The school's comparatively low PeerRep and BarRep scores (the lowest of those in the top-12) and GPA score (nearly tied for the lowest) pull it down; Northwestern's Over$/Stu score saves it.
[Since I find I'm running on a bit, I'll offer some other graphs and commentary in a later post or posts.]
[Crossposted at Agoraphilia, MoneyLaw.]
Monday, August 17, 2009
Transfer Fees
"A third myth is that clubs cannot buy success. They can, so long as they spend on players’ wages rather than on transfers. Almost 90% of the variation in the positions of leading English teams is explained by wage bills. Transfer fees contribute little. New managers hoping to make their mark often waste money. Stars of recent World Cups or European championships are overrated. So are older players. So, curiously, are Brazilians and blonds."
I guess the best example of this in baseball is the Red Sox and Dice-K. But I wondered whether there are transfer fees in law teaching and whether the same phenomenon could be at work. I could think of only one arrangement that resembles a transfer fee and one other practice that has the same effect.
At my school, if you take a sabbatical you must come back for at least a year. If not, as I understand it, either the person leaving or, more likely, the destination school must provide compensation. To me that is very similar to a transfer fee, but certainly not of the magnitude of those you read about in soccer.
Another practice that has the same effect is the treatment of a trailing spouse. The trailing spouse matter usually involves privileged people who have come to believe that, unlike the lower classes, they should not be put to life's hard choices. At my University for a time (and maybe even now) there was a plan. If one department wanted to hire a person who had a trailing spouse, that department would pitch in 1/3 of the trailer's salary. The department hiring the trailer would pay 1/3 and the central administration would pay 1/3.
So, suppose a department found a good candidate and offered $100,000. The trailing spouse matter is then raised and the plan is put into action. The trailer's salary will be $90,000. Listing it as the trailer's salary is a nice way to let the trailer save face, but in reality the new faculty member is being paid at least $130,000, not $100,000.
Is this a transfer fee? Obviously it is not, because ultimately it becomes, indirectly, part of the wage of the new hire. On the other hand, the first department had a budget to spend on the "player" of $100,000. If it had known that it really had a budget of $130,000, it could have shopped at a different and more productive level. Put differently, if the school had considered what it was actually paying for its new hire, it could have hired someone better. As with the transfer fee, for the total amount paid, a better decision could be made.
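The arithmetic behind that point, spelled out (numbers from the example above):

```python
# The arithmetic behind the example above: the first department's real
# outlay includes its one-third share of the trailing spouse's salary.
offer = 100_000          # salary offered to the primary hire
trailer_salary = 90_000  # the trailing spouse's listed salary

department_share = trailer_salary / 3  # 1/3 dept, 1/3 trailer's dept, 1/3 admin
effective_cost = offer + department_share

print(effective_cost)  # 130000.0 -- the budget that could have shopped higher
```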
Are there other academic hiring transfer fees? Not sure.