By knocking off expenditure metrics and devaluing peer reputation scores in the new USNWR formula, did law schools just kill the faculty's golden goose?

As Aesop tells it, there was a goose that laid golden eggs. The greedy farmer saw the goose and thought there must be more gold inside it. The farmer killed the goose and found nothing special inside, but he had now lost the ability to gather any more golden eggs.

It may not be exactly the same story with the USNWR boycott and subsequent rankings changes. Law schools may well have attacked the goose thinking it was a wolf. But upon its demise, law schools may have permanently lost one of their most significant bargaining chips with central universities when trying to secure more funding for the law school.

Let me point out at the outset that I've long been critical of many aspects of the USNWR rankings, including the expenditure data. The metric has been opaque, and it fueled a kind of arms race as schools figured out which accounting tricks they could use to raise their expenditure figures. And let me add that eliminating it is in many respects a good thing, because the costs often fell on student loan borrowers through tuition hikes. So the analysis below is a small violin for many, indeed!

But a sober look at the change is in order. I wrote yesterday about a potential effect of eliminating the expenditures-per-student metric:

By the way, it’s worth considering a new and different incentive for law schools situated within universities right now. Until now, law schools could make the case to central administration that high spending on resources, including on law professor salaries, was essential to keeping their place in the rankings. No longer. It’s worth considering what effect this changed incentive may have on university budgets, and on the allocation of resources, in the years ahead.

Judging from some offline and private conversations, this factor has been one of the most eye-opening for the law professoriate.

In the past, law schools could advocate for more money by pointing to this metric. “Spend more money on us, and we rise in the rankings.” Direct expenditures per student, including law professor salaries, were 9% of the overall rankings in the most recent formula. They were also one of the biggest sources of disparities among schools, which meant that increases in spending could yield greater rankings benefits than increases in other categories. It was a source for naming gifts, for endowment outlays, for capital campaigns. It was a way of securing more spending relative to other units at the university.

And indirectly, reputation surveys, including 25% for peer surveys and 15% for lawyers and judges, were a tremendous 40% of the formula, too. Schools could point to this factor to say, “We need a great faculty with a public and national reputation; let us hire more people or pay more to retain them.” Yes, the “value” proposition here was more indirect, but law faculty rating other law schools may well have been most inclined to vote for, well, the faculties they thought were best.

Now, the expenditure data is gone, completely. And peer surveys will be diminished to some degree, a degree that will be known only in March.

An increased emphasis on measuring outputs, including bar passage data and employment outcomes, will replace them.

For law faculty specifically, and for law schools generally, this is a fairly dramatic turn of events.

For a law school going to a central university administration now to say, “We need more money,” the answer to the “why” just became much more complicated. The easy answer was, “Well, we need it for the rankings, because you want us to be a school rated in the top X of the USNWR rankings.” That’s gone now. Or, at the very least, it’s diminished significantly, and the case can only be made, at best, indirectly.

The conversation will look more like, “Well, if you’re evaluated on bar passage and employment, what are you doing about those?”

A couple of years ago, I shared some long thoughts on the hollowness of law school rankings. For schools that lack confidence in their institution and lack the vision to articulate its value without reference to rankings, rankings have provided easy external validation. They have also provided easy justification for these kinds of funding asks over the years.

Those easy days are over. Funding requests will need to look very different in a very short period of time.

Are there other things in the USNWR rankings that law schools can point to in support of a specific investment in law faculty? Well, one such measure may have been citation metrics, about which I had some tentative but potentially positive things to say when USNWR considered them. But law schools mounted a pressure campaign to nix that idea, too.

At the end of the day, then, the rankings formula will have very little to say about the quality of a law school’s faculty or the school’s financial investment in its faculty. An indirect case remains, of course, including a diminished peer reputation score. And faculty do contribute to bar passage and to employment outcomes. There will still be a faculty-student ratio.

But I think the financial case for law schools may well look very different in the very near future. The effect will be almost impossible to measure, and the anecdotes emerging from these changes may be wild and unpredictable. It’s also contingent on other USNWR changes, of course. But it’s a question I’ll be watching closely over the next decade.