Do law professors generally think most other law schools are pretty awful?

The U.S. News & World Report ("USNWR") law school rankings include a number of illuminating bits of information and some weaknesses, as I showed yesterday. But a cursory look at Paul Caron's display of the peer reputation scores reveals, perhaps, a startling truth: law professors generally think most other law schools are pretty awful. (I qualify that with "other" because most law professors probably think their own schools are pretty good.)

A few law professors at each school--about 800 in total--receive the peer reputation survey, a paper ballot mailed to a handful of faculty: the dean, the academic dean, the faculty appointments chair, and the most recently tenured faculty member. The response rate tends to be fairly high, and understandably so--this survey accounts for 25% of the total USNWR ranking.

The survey asks faculty to rate schools on a scale of 5 ("outstanding") to 1 ("marginal"). The intermediate numbers are sometimes glossed as well, a 3 being "good" and a 2 being "adequate." (And "adequate" is widely regarded as a fairly poor, back-handed assessment.)

One might expect a fairly ordinary distribution between 5 and 1, perhaps a bell curve with the bulk of schools clustered around 3 in the middle. But it turns out law professors think little of other schools.

Just 47 schools exceed the middling score of 3.0. Nearly 80 schools score a 2.0 or below. The median score is a dismal 2.3. And over the years, law professors' peer scores have slightly declined on the whole--meaning professors think schools are getting worse.

A visualization of the distribution makes the point vividly.
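For readers who want to reproduce these tallies from the published table, here is a minimal sketch in Python. The scores below are illustrative placeholders, not the actual peer reputation data; substitute the real figures.

```python
# Minimal sketch: summarizing a list of peer reputation scores.
# The scores below are illustrative placeholders, NOT the actual
# USNWR data; substitute the real scores from the published table.
from collections import Counter
from statistics import median

peer_scores = [4.8, 4.6, 4.1, 3.4, 3.2, 2.9, 2.6, 2.4, 2.3,
               2.2, 2.1, 2.0, 1.9, 1.8, 1.6, 1.5, 1.4, 1.2]

print(f"Median score: {median(peer_scores):.1f}")
print(f"Schools above 3.0: {sum(s > 3.0 for s in peer_scores)}")
print(f"Schools at 2.0 or below: {sum(s <= 2.0 for s in peer_scores)}")

# A crude text histogram in 0.5-wide bins, to eyeball the shape
# of the distribution.
bins = Counter(int(s * 2) / 2 for s in peer_scores)
for b in sorted(bins):
    print(f"{b:>4} | {'#' * bins[b]}")
```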

Now, perhaps my asking-a-question-as-a-headline is all clickbait [ed.: on my ad-free blog!], and I'm burying the lede--that is, the alternative factors that may contribute to these results. (But it remains quite possible that law professors actually do believe most schools are quite poor.)

First, the USNWR survey itself may be flawed. Not only may it be gamed (see below), but the survey asks a fairly generic question about each school's overall quality and offers fairly generic categories for answering it. It's hard to know whether professors are judging schools based on scholarly output, graduate outcomes, or, perhaps, simply echoing last year's USNWR rankings.

Second, law professors may be gaming the rankings. They know very well that giving a 5 to a school increases that school's score--and increases the chance that it surpasses one's home institution in the rankings. That creates pressure toward ratings deflation. Further, a large number of 4s can be offset by a much smaller number of 1s, as the arithmetic below illustrates.
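A quick sketch of that offset, with hypothetical rater counts (the real survey pool differs): ten strategic 1s drag a school's mean down as far as thirty raters downgrading from a 4 to a 3 would.

```python
# Hedged arithmetic sketch of strategic low-balling.
# Rater counts are hypothetical; the real survey pool differs.

def mean(xs):
    return sum(xs) / len(xs)

honest_baseline = [4] * 100              # everyone gives a 4
ten_lowballs = [4] * 90 + [1] * 10       # ten strategic 1s
thirty_downgrades = [4] * 70 + [3] * 30  # thirty modest 3s

print(mean(honest_baseline))    # 4.0
print(mean(ten_lowballs))       # 3.7
print(mean(thirty_downgrades))  # 3.7 -- same damage, three times the raters
```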

Third, law professors may be expressing their ignorance of schools. If they're unaware of a school's quality, or only marginally aware, they may simply default to a "1" and drive down that school's ranking. Even though the survey expressly permits professors to decline to rate a school if they lack sufficient information, the temptation to rate it anyway (particularly for gaming purposes) may simply be too great.

Then again, these concerns may be overblown anyway! Even if the peer ratings are artificially low, they still correlate highly with the ranked-choice preferences from law school surveys conducted by Brian Leiter.
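To make "correlate highly" concrete: one standard check is a Spearman rank correlation, which compares relative orderings rather than raw scores. A minimal hand-rolled sketch (the two score lists are invented placeholders, not USNWR or Leiter data):

```python
# Hedged sketch: checking whether two sets of school ratings agree
# in *relative* terms via Spearman rank correlation. The numbers
# are invented placeholders, not USNWR or Leiter data.

def ranks(xs):
    """Map each value to its rank (1 = highest); assumes no ties."""
    order = sorted(range(len(xs)), key=lambda i: -xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rho via the standard rank-difference formula."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

usnwr_peer = [4.8, 4.4, 3.9, 3.1, 2.6, 2.3, 1.9]   # placeholders
leiter_pref = [4.9, 4.5, 3.3, 3.7, 2.8, 2.0, 2.1]  # placeholders

print(f"Spearman rho: {spearman(usnwr_peer, leiter_pref):.2f}")  # 0.93
```

With these placeholders, rho comes out around 0.93--near-agreement on the ordering of schools despite different absolute levels.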

It's probably best, then, to conclude that the peer reputation scores should be taken, to borrow a phrase, seriously but not literally. They're best understood as relative preferences among schools, not absolute ratings of school quality.

Taken that way, the scores suggest that, in most law professors' opinion, most law schools are clumped together. Twenty-seven schools score between 3.1 and 3.5, followed by a noticeable gap with just 5 schools between 2.8 and 3.0. Fifty-nine schools score between 2.2 and 2.7; in an overlapping band, 63 schools score between 1.9 and 2.4.

Perhaps there's a better way for USNWR to conduct the survey, or to report the results, that would alleviate some of these problems--say, a digital ballot with ranked-choice voting. (Not that USNWR would likely change its methodology even if such an alternative were available.) But without that, it's worth thinking about how best to construe these survey results. And it's probably best to think of the survey not in absolute terms but in relative ones--a few elite schools, a handful of good schools, and significant clumps of other schools.
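If USNWR ever did adopt ranked ballots, the tallying itself would be straightforward. One simple positional method is a Borda count (just one of several ways to aggregate rankings, and distinct from instant-runoff); a minimal sketch with hypothetical ballots and school names:

```python
# Hedged sketch of one ranked-ballot aggregation: a simple Borda
# count. Ballots and school names are hypothetical.
from collections import defaultdict

ballots = [
    ["School A", "School B", "School C", "School D"],
    ["School B", "School A", "School C", "School D"],
    ["School A", "School C", "School B", "School D"],
]

points = defaultdict(int)
for ballot in ballots:
    # Top choice earns len(ballot)-1 points, each later slot one fewer.
    for position, school in enumerate(ballot):
        points[school] += len(ballot) - 1 - position

for school, score in sorted(points.items(), key=lambda kv: -kv[1]):
    print(f"{school}: {score}")
```

A tally like this yields exactly what the post argues the current scores already are: a relative ordering of schools, with no pretense of measuring absolute quality.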