The absurd volatility of USNWR specialty law rankings

Last year, USNWR dramatically expanded the scope of its specialty law rankings. It went from ranking a handful of schools in categories like clinical training and health care law to ranking virtually all law schools. I noted last year how this might prompt a new arms race among law schools. This year, USNWR added four categories: “Business-Corporate Law,” “Contracts-Commercial Law,” “Criminal Law,” and “Constitutional Law.”

Each of the ~200 law schools is asked to list one faculty contact in each area. Those faculty are then given a survey and asked to rate schools on a scale of 1 to 5, or to skip schools they don’t know enough about. There is comical compression outside the top handful of schools in each category, which is why USNWR historically didn’t go deep into these rankings. Nevertheless, beginning last year, it started publishing rankings of all schools that received at least 10 responses.

Response rates for the surveys range from around 45% to 60%. With roughly 200 contacts, one per school, that works out to something like 90 to 120 respondents: these are basically surveys of about 100 law professors, each identified by a dean or some administrator as the relevant contact in the field.

To see just how precarious the surveys are, consider that individual schools probably receive far fewer than 100 evaluations, since respondents can skip schools they don’t know, and many schools sit close to the 10-vote threshold.

I picked the “Tax Law” category at random from among the areas surveyed in both 2018 (for the 2020 rankings) and 2019 (for the 2021 rankings).

Here are schools ranked in 2020 (with their peer score averages on a scale of 1-5) that were unranked in 2021:

Barry 1.0
St. Thomas (Florida) 1.2

And here are schools ranked in 2021 (with their peer score averages on a scale of 1-5) that were unranked in 2020:

Ave Maria 1.0
Belmont 1.1
Concordia 1.0*
Drake 1.3
Duquesne 1.5
Faulkner 1.0
Lincoln Memorial 1.0*
North Carolina Central 1.1
Northern Kentucky 1.1
Regent 1.1
Memphis 1.2
Western New England 1.1

(*Two schools were added to surveys after receiving ABA accreditation.)

It’s worth considering that each of these schools probably received barely more than 10 votes in a given year, even if the survey purportedly went out to more than 100 tax law professors. It’s questionable how much value the survey has for schools averaging below 2.0 when the effective number of votes per school is so pitifully low.

Another problem is volatility. With samples this small, we should expect a high degree of year-to-year movement that tells us nothing about actual changes in reputation. Consider a few of the larger swings (see the simulation sketch below):

South Carolina: 2.6 (+0.6)
Loyola Chicago: 2.7 (+0.4)
Albany: 1.8 (+0.4)
Cornell: 2.9 (+0.4)
Ohio State: 3.1 (+0.4)
Baltimore: 2.3 (+0.4)
Cincinnati: 2.4 (+0.4)
Richmond: 2.6 (+0.4)
Washington University: 3.3 (+0.4)
-
Suffolk: 1.5 (-0.4)
Wyoming: 1.5 (-0.4)
Vermont: 1.3 (-0.4)
New England Law: 1.2 (-0.4)
Montana: 2.0 (-0.5)

To give you some perspective: in the entire history of the USNWR peer reputation scores, there has been exactly one instance of a score rising by 0.4 (and never by more), and exactly three instances of a score falling by 0.4 (never by more), all directly tied to scandals at the institution.
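
To make the small-sample point concrete, here is a minimal simulation sketch. Everything in it is assumed rather than drawn from USNWR’s data: the rating distribution (a hypothetical school whose evaluators cluster around 2-3 on the 1-5 scale), the 15 votes per year, and the number of trials. The point is only that when an average rests on so few integer ratings, year-over-year swings of 0.4 or more can arise from sampling noise alone.

```python
import random

# Entirely hypothetical distribution for a school whose raters cluster around 2-3
# on the 1-5 scale; USNWR does not publish vote-level data, so these weights and
# the vote count below are assumptions for illustration only.
RATINGS = [1, 2, 3, 4, 5]
WEIGHTS = [0.15, 0.35, 0.35, 0.10, 0.05]
VOTES_PER_YEAR = 15      # assumed number of respondents who rate this school
TRIALS = 100_000

def year_average():
    """One year's published-style average: mean of the votes, rounded to 0.1."""
    votes = random.choices(RATINGS, weights=WEIGHTS, k=VOTES_PER_YEAR)
    return round(sum(votes) / len(votes), 1)

# How often do two years differ by 0.4 or more, even though the school's
# underlying reputation is identical in both?
big_swings = sum(
    round(abs(year_average() - year_average()), 1) >= 0.4
    for _ in range(TRIALS)
)
print(f"Swings of 0.4+ from sampling noise alone: {big_swings / TRIALS:.1%}")
```

With these assumed numbers, roughly a quarter to a third of the simulated year pairs move by 0.4 or more. The exact figure isn’t the point; the point is that a swing the main peer score has produced only a handful of times in its history is, at this sample size, an unremarkable artifact of noise.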

This reflects absurd year-over-year volatility in these “rankings.” It could be partly a matter of a single faculty member at a given school who did, or didn’t, fill out the survey in one year as opposed to the other (i.e., giving one’s home institution a “5” one year and then not voting the next).

I checked Brian Leiter’s report of lateral faculty moves last year; it shows only one reported tax move (from George Washington to Florida). That is, there’s nothing obvious to suggest that anything materially changed at any of these institutions (although someone might point out a detail I’ve missed).

These comically bad surveys, however, will continue to receive outsized advertising from law schools and outsized weight from prospective law students. I simply highlight the absurdity here.