Law school faculty have aggressively and successfully lobbied to diminish the importance of law school faculty in the USNWR rankings

In many contexts, there is a concern about “regulatory capture,” the notion that a regulated industry will lobby its regulator to ensure that the regulator sets forth rules most beneficial to the industry’s interests.

In the context of the USNWR law rankings, the exact opposite has happened when it comes to the interests of law school faculty. Whether it has been intentional or inadvertent is hard to say.

It is in the self-interest of law school faculty to ensure that the USNWR law school rankings maximize the importance and influence of law school faculty. The more that faculty matter in the rankings, the better life is for law faculty—higher compensation, more competition for faculty, more hiring, more recognition for work, more earmarking for fundraising, the list goes on.

But in the last few years, law school faculty (sometimes administrators, sometimes not) have pressed for three specific rules that affirmatively diminish the importance of law faculty in the rankings.

First, citation metrics. USNWR suggested in 2019 that it would consider incorporating law school faculty citation metrics into the USNWR law school rankings. There were modest benefits to this proposal, as I pointed out back in 2019. Citation metrics are less “sticky” than peer reputations and may better capture the “influence” or quality of a law faculty.

But the backlash was fierce. Law faculty complained loudly that the citation metrics may not capture everything, may capture it imperfectly, may introduce new biases into the rankings, may create perverse incentives for citations—the list went on and on. USNWR abandoned the plan.

Note, of course, that even an imperfect metric was specifically and crucially tied to law school faculty generally, and faculty scholarly productivity particularly. Imperfect as it may have been, it would have specifically entrenched law school faculty interests in the rankings. But law school faculty spoke out sharply against it. It appears that the backlash—at least in part—helped drive the decision not to use it.

Second, expenditures per student. Long a problem and a point of criticism was the expenditures-per-student metric. A whopping 9% of the old USNWR formula measured “direct” expenditures (i.e., not scholarships). That includes law professors’ salaries. The more you spent, the higher you could rise in the rankings.

Expenditures per student was one of the first things identified by “boycotting” schools last fall as a problematic category. And they have a point! The data was not transparent and was subject to manipulation. It did not really have a bearing on the “quality” of the student experience (public schools, for example, simply spent less).

But as I pointed out earlier this year, knocking out expenditures per student kills the faculty’s golden goose. As I wrote:

In the past, law schools could advocate for more money by pointing to this metric. “Spend more money on us, and we rise in the rankings.” Direct expenditures per student—including law professor salaries—were 9% of the overall rankings in the most recent formula. They were also one of the biggest sources of disparities among schools, which also meant that increases in spending could have higher benefits than increases in other categories. It was a source for naming gifts, for endowment outlays, for capital campaigns. It was a way of securing more spending than other units at the university.

. . .

To go to a central university administration now and say, “We need more money,” the answer to the “why” just became much more complicated. The easy answer was, “Well, we need it for the rankings, because you want us to be a school rated in the top X of the USNWR rankings.” That’s gone now. Or, at the very least, diminished significantly, and the case can only be made, at best, indirectly.

The conversation will look more like, “Well, if you’re valued on bar passage and employment, what are you doing about those?”

Again, law faculty led the charge to abolish the expenditures-per-student metric—that is, chopping the metric that suggested high faculty salaries were good and ought to count in the rankings.

Third, peer score. Citation metrics, I think, would have been a way to remedy some of the maladies of the peer scores. Peer scores are notoriously sticky and non-responsive to current events. Many law schools with “high” peer scores have them because of some fond recollections of the faculty circa 1998. Others have “low” peer scores because of a lack of awareness of who’s been writing what on the faculty. Other biases may well abound in the peer score.

The peer scores were volatile in limited circumstances. Renaming a law school could result in a huge bounce. A scandal could result in a huge fall—and persist for years.

But at 25% of the rankings, they mattered a lot. And as they were based on survey data from law school deans and faculty, your reputation within the legal academy mattered a lot. And again, I think the way faculty valued other law schools was mostly based on their faculty. Yes, I suppose large naming gifts, reports of high bar passage and employment, or other halo effects around the school (including athletics) could contribute. But I think the reputation of law schools among other law schools was often based on the view of the faculty.

Private conversations that law faculty and deans have had with USNWR over the years, however, have focused criticism on the peer score. Law faculty can’t possibly know what’s happening at 200 schools (though survey respondents have the option of not voting if they don’t know enough). There are too many biases. It’s too sticky. Other metrics are more valuable. My school is underrated. On and on.

Fair enough, USNWR answered. Peer score will be reduced from 25% to 12.5%. The lawyer-judge score will be reduced from 15% to 12.5%—and now equal with peer score. Combined, the reputational surveys fall from 40% of the formula to 25%.

To start, I doubt lawyers and judges know as much about the “reputation” of law schools. Perhaps they are more inclined to leave blanks. But the practice of law is often a very regional practice, and one could go a long time without ever encountering a lawyer from any number of law schools. And many judges may have no idea where the litigants in front of them went to law school. In contrast, law school faculty and deans know a lot about what’s happening at other law schools—giving faculty workshops and talks, interviewing, lateraling, visiting, attending conferences and symposia.

But setting that aside, law faculty were successful. They pressed to diminish the peer score, a mechanism for evaluating the quality of a law school that was often based on the quality of its faculty. Back to the golden goose, as I noted earlier:

And indirectly, the 40% of the formula for reputation surveys, including 25% for peer surveys and 15% for lawyer/judge, was a tremendous part of the formula, too. Schools could point to this factor to say, “We need a great faculty with a public and national reputation, let us hire more people or pay more to retain them.” Yes, it was more indirect about whether this was a “value” proposition, but law faculty rating other law faculty may well have tended to be most inclined to vote for, well, the faculty they thought were best.

Now, the expenditure data is gone, completely. And peer surveys will be diminished to some degree, a degree only known in March.

*

Maybe this was the right call. Certainly for expenditure data, I think it was a morally defensible—even laudable—outcome. For the citation data and the peer score, I am much less persuaded that opposition was the right thing or a good thing. There are ways of addressing the weaknesses in these areas without calling for a reduction in weight or impact, which, I think, would have been preferable.

But instead, I want to make this point. One could argue that law school faculty are entirely self-interested and self-motivated to do whatever possible to ensure that they, as faculty, will receive as much security, compensation, and accolades as possible. Entrenching those interests in highly influential law school rankings would have been a way to do so.

Yet in three separate cases, law faculty aggressively lobbied against their own self-interest. Maybe that’s because they viewed it as the right thing to do in a truly altruistic sense. Maybe it’s because they wanted to break any reliance on USNWR or make it easier to delegitimize the rankings. Maybe it was a failure to consider the consequences of their actions. Maybe the effects I project these criteria have on faculty are simply not significant. I’m not sure.

In the end, however, we have a very different world from where we might have been five years ago. Five years ago, we might have been in a place where faculty publications and citations were directly rewarded in influential law school rankings; where expenditures on faculty compensation were rewarded in those rankings; and where how other faculty viewed you counted heavily in those rankings. None of that is true today. And it’s a big change in a short time.