The 2024-2025 USNWR law school rankings: methodology tweaks may help entrench elite schools, but elite schools see reputation decline among lawyers and judges
Hours after the release of last year’s dramatic change to the USNWR methodology, I noted the increase in “compression and volatility” it would produce in the coming rankings.
USNWR changed a couple of things in its methodology:
There were a couple differences in how the rankings were calculated, described below. In summary, U.S. News averaged its bar passage and employment indicators over two years. Also, the lawyers and judges assessment score had a second source of ratings besides names supplied by law schools.
While that might not be the design (more on that in a moment), the effect may well be to entrench elite schools.
1. Changes to employment (and bar passage)
USNWR decided to use two-year figures for both employment and bar passage. Here’s how it explained the employment changes.
To improve measurement of this indicator – given the common year-to-year fluctuations associated with outcome measures and the small sizes of some graduating J.D. classes – this indicator was derived from the average of the 2021 and 2022 graduating class outcomes 10 months after graduation.
This isn’t entirely persuasive, for several reasons. First, the “small sizes” of some classes is not the real issue; classes have been small for the decades that USNWR has used these categories, yet it never thought to use a two-year average until now. The same goes for year-to-year fluctuations, which have existed for as long as USNWR has used these metrics.
The issue, instead, is the compression the new methodology introduced into the rankings, combined with high volatility in the categories given the most weight.
Compare this visualization of schools two years ago to last year, and where the raw scores put the top ~60 schools.
The methodology changes removed or reduced the weight of categories that created a broader spread across schools; that created the compression. They then gave additional weight to the categories that are the most volatile. That is what would produce the more dramatic changes projected among schools this year: not just volatility, but volatility within a highly compressed rating system.
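To make that mechanism concrete, here is a minimal sketch, using made-up scores rather than USNWR’s actual data or weights, of how a heavily weighted, volatile indicator scrambles ordinal ranks when the underlying scores are tightly compressed:

```python
import random

random.seed(0)

# Hypothetical setup: 60 schools whose baseline scores sit in a narrow band,
# plus noise standing in for a heavily weighted, volatile indicator.
n_schools = 60
baseline = [80 - 0.1 * i for i in range(n_schools)]  # gaps of only 0.1 points

def rank(noise_sd):
    """Rank schools after adding noise from the volatile indicator."""
    totals = [s + random.gauss(0, noise_sd) for s in baseline]
    order = sorted(range(n_schools), key=lambda i: totals[i], reverse=True)
    return {school: position + 1 for position, school in enumerate(order)}

stable = rank(0.0)   # no volatility: ranks follow the baseline exactly
shaken = rank(1.0)   # volatility much larger than the 0.1-point gaps

moves = [abs(shaken[i] - stable[i]) for i in range(n_schools)]
print("average places moved:", round(sum(moves) / n_schools, 1))
print("largest single move:", max(moves))
```

The same amount of noise would barely matter if the baseline scores were spread far apart; it is the combination of compression and a heavily weighted volatile input that produces the large swings.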
So why did USNWR decide to change this year? There are two possible explanations, and, tellingly, either one looks bad for USNWR.
One explanation is that USNWR was simply unaware of the potential volatility in its rankings and is responding now. That is a bad look for USNWR. It took me minutes to spot this likely problem; if it escaped its entire data team’s months-long vetting, that is a telling concession.
The other explanation is that USNWR was aware of the potential volatility but waited until this year to take a step to reduce it. That’s a bad look, too: if it was aware of the problem, why didn’t it address it then? It had, after all, all of the granular employment data in previous years. And if it was aware last year, what prompted the change this year?
The related answer to both, by the way, is that USNWR saw something problematic in what the outputs would be and modified the weighting to avoid undesirable results. This is not something I have proof of, I admit; I can only infer it from the actions taken in response to events of the last year.
But we saw a few schools (notably, as I pointed out, NYU and Cornell) that would disproportionately suffer under the new system. I projected NYU to slide to 11 and Cornell to 18. Instead, with the re-weighting, NYU slid only to 9, and Cornell to 14. Other schools, particularly Washington University in St. Louis, North Carolina, and Texas A&M, were projected to rise much faster. The rankings changes are designed to put a governor on moves down, or up, the rankings.
Now, it’s not possible to prove that USNWR saw that NYU and Cornell would slide much faster than it thought appropriate and changed the methodology in response. But I can point out that these arguments were raised publicly for months, and this methodological change is designed to slow down the kinds of dramatic moves that we publicly expected this year. It’s not a good look whatever the motivation, because it reflects a lack of competence about the changes instituted last year. Relatedly, USNWR is here conceding that too much volatility is a bad thing; that is, it would prefer to see less movement (and more entrenchment) in its final product.
(The lengthening window of data is creating increasingly strange results. For instance, today’s prospective law students are considering what their employment and bar outcomes will look like in 2027, while the current methodology includes data stretching back to the Class of 2019 (a two-year average of ultimate bar passage rates for the Classes of 2019 and 2020). That said, perhaps it’s better to think of schools over a longer period of time rather than through a fresh one-year snapshot each year.)
2. Added value of career development (and bar support) at law schools
Last December, I blogged, “Perhaps the most valuable legal education job in the new USNWR rankings landscape? Career development.” If that was true then, it’s essentially doubly true now.
When I looked at the dramatic opportunity for law schools to rethink how they do admissions, I highlighted how broad the spread was for employment outcomes, and how small fluctuations could affect a school’s place dramatically. (See earlier for NYU and Cornell.) The same was true, to a lesser extent, for bar passage.
Now, in pure mathematical terms, the effect of a given class’s employment output is unchanged. It was 33% of the rankings last year; it’s now (effectively) 16.5% of the rankings for each of two years. Formally, no difference.
But, I would posit, employment effects have now effectively doubled.
A good year will redound to a school’s benefit for two years; a bad year will need to be managed across two years. There is no more ripping the bandage off and moving on to the next year; a bad year will linger. And while a given class receives less weight in any single edition, a school is still trying to maximize its outcomes every year.
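To put the arithmetic in one place, here is a small sketch of how a single weak class now feeds into two consecutive editions instead of one. The 33% and 16.5% weights come from the discussion above; the employment rates are invented for illustration:

```python
# Weights as described above: 33% in a single edition, split across two classes.
weight_total = 0.33
weight_per_class = weight_total / 2   # effectively 16.5% per class year

# Hypothetical 10-month employment rates for three graduating classes.
classes = {2021: 0.92, 2022: 0.80, 2023: 0.91}

def edition_input(year_a, year_b):
    """Two-year average that feeds one rankings edition."""
    return (classes[year_a] + classes[year_b]) / 2

# The weak 2022 class drags down two consecutive editions rather than one:
print("edition using 2021+2022 classes:", edition_input(2021, 2022))  # 0.86
print("edition using 2022+2023 classes:", edition_input(2022, 2023))  # 0.855
```

The per-edition weight is halved, but the number of editions a class touches is doubled, which is why a bad year can no longer be written off after a single cycle.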
So what I said before, about career development being the most valuable job in legal education? Doubly true.
The legal profession is witnessing a slowdown in hiring, and tougher times are coming for graduating law classes in the very near future. You don’t want to be preparing for the storm in the middle of it. Law schools should be in the process of adding to their career development offices; in fact, as a rule of thumb, I’d say doubling their size. And if you’re not… well, I hate to use the term “academic malpractice” without an individualized assessment, but it’s the term I’m likely to use anyway. While that may sound like overkill, recall that this isn’t simply a USNWR gimmick. It benefits students to have high-quality career advising and mentoring for their professional careers, particularly as economic challenges arise.
(The same is true for bar passage, but at many schools, I think, the value will largely be in ensuring that students who fail the bar exam on the first attempt ultimately get over the finish line. The state-specific, relative nature of the bar passage metric makes it tougher to quantify here. So the same holds, I think, just to a smaller degree, for bar support more generally.)
3. Changes to lawyer and judge peer reputation surveys
One more methodological change of note:
Legal professionals – including hiring partners of law firms, practicing attorneys and judges – rated programs' overall quality on a scale from 1 (marginal) to 5 (outstanding), and were instructed to mark "don't know" for schools they did not know well enough to evaluate. A school's score is the average of 1-5 ratings it received across the three most recent survey years. U.S. News administered the legal professionals survey in fall 2023 and early 2024 to recipients that law schools provided to U.S. News in summer 2023. Of those recipients surveyed in fall 2022 and early 2023, 43% responded. For this edition, U.S. News complemented these ratings by surveying partners at big law firms, sampled based on their size – larger firms were more frequently surveyed – while establishing geographic dispersion. Leopard Solutions, which partnered with U.S. News on its Best Companies to Work For: Law Firms list, provided U.S. News with the contacts from which a sample was drawn.
USNWR recognized that as schools “boycotted” the survey, it would have a smaller universe of lawyers and judges to survey. In the past, each school submitted 10 names (up to roughly 2,000 names in total). The response rate was quite low, so USNWR used a three-year average. As schools stopped submitting names, USNWR looked elsewhere.
And it deliberately selected a category: “partners at big law firms, sampled based on their size—larger firms were more frequently surveyed.”
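USNWR has not published the details of its sampling procedure, so here is a minimal sketch of one plausible reading of “sampled based on their size”: drawing survey recipients with probability proportional to firm headcount. The firm names and sizes below are invented:

```python
import random

random.seed(1)

# Invented firm sizes (rough partner headcounts); not Leopard Solutions data.
firms = {"Firm A": 900, "Firm B": 400, "Firm C": 150, "Firm D": 60}

# Probability-proportional-to-size sampling: larger firms are drawn more often.
names = list(firms)
weights = [firms[name] for name in names]
sample = random.choices(names, weights=weights, k=10)
print(sample)  # the 900-partner firm dominates the draw
```

Whatever the exact mechanics, weighting by firm size means the survey pool skews toward the very largest firms, which matters for the reason the next data set suggests.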
In “Where Do Partners Come From?,” Professor Ted Seto tracked where NLJ 100 law firm partners, the partners at the largest law firms, came from. The data is from 2012, but we know that the composition of large law firm partnerships does not fluctuate dramatically. Here’s the top 20, with the raw number of partners listed:
1. Harvard: 946
2. Georgetown: 729
3. NYU: 543
4. Virginia: 527
5. Columbia: 516
6. George Washington: 447
7. Michigan: 444
8. Chicago: 426
9. Texas: 384
10. Northwestern: 365
11. Pennsylvania: 329
12. Boston University: 317
13. Fordham: 306
14. UC Berkeley: 287
15. UCLA: 257
16. Yale: 253
17. Stanford: 240
18. UC Hastings: 233
19. Duke: 219
20. Boston College: 213
These 20 schools are nearly all in the “top 20” or just outside of it in the USNWR rankings, and the handful that fall outside (e.g., George Washington, UC Law SF, formerly Hastings) have, at varying times, been closer to the “top 20.” We can expect some affinity (or bias) for these partners’ home institutions, and perhaps for “peer” institutions as well (e.g., where their fellow partners at their firms attended school).
With almost clinical precision, then, USNWR has opted for a category to “complement” the survey that is likely to benefit the most elite law schools.
So, did it work? Well, to be fair, perhaps my assumption is wrong.
It’s worth noting that 11 of the “top 14” schools are experiencing all-time lows in the lawyer and judge survey category, either new lows or lows that tie previous lows, since USNWR began using this metric in 1998. Here’s each school’s score in this category (on a 1-5 scale), with its all-time high for comparison.
Stanford: 4.7 (all-time high: 4.9)
Harvard: 4.6 (4.9)
Chicago: 4.6 (4.8)
Columbia: 4.5 (4.8)
Yale: 4.5 (4.9)
Michigan: 4.4 (4.7)
Virginia: 4.4 (4.6)
Duke: 4.3 (4.5)
NYU: 4.3 (4.6)
Berkeley: 4.3 (4.6)
Georgetown: 4.2 (4.5)
Penn saw a decline from 4.4 to 4.3, and Cornell saw a decline from 4.3 to 4.2, but neither was an all-time low. Northwestern was the only school whose score held stable, at 4.3. UPDATE: I mistakenly had Virginia’s previous high at 4.5 instead of 4.6.
Compare that to the next 86 schools that have been ranked in this category since 1998: just 6 others experienced all-time lows, again either new lows or lows that tie previous lows.
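Using the counts above, a quick back-of-the-envelope comparison (my own arithmetic, not anything USNWR publishes) shows how lopsided that split is:

```python
# Counts from the post: 11 of the 14 "top 14" schools are at all-time lows in
# the lawyer/judge survey, versus 6 of the 86 other schools ranked since 1998.
top14_low, top14_total = 11, 14
rest_low, rest_total = 6, 86

top14_rate = top14_low / top14_total   # about 0.79
rest_rate = rest_low / rest_total      # about 0.07

print(f"top-14 schools at all-time lows: {top14_rate:.0%}")
print(f"all other ranked schools:        {rest_rate:.0%}")
print(f"relative rate: {top14_rate / rest_rate:.1f}x")   # roughly 11x
```

However one slices it, the elite schools are hitting these lows at many times the rate of the rest of the ranked pool.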
What could cause this disparity? Causation is tough to identify here, but let me posit two things.
First, we are seeing the slow phase-out of “boycotting” schools’ data inputs; now two-thirds of the schools’ data is out of the mix. Schools were inclined to include their own supporters, and those supporters are gone. It’s hard to say that this is happening with such clinical precision at only the elite law schools and nowhere else. But perhaps many elite law schools boycotted, and there are some tag-along effects, as elite schools tend to rate elite schools comparably. That said, “complementing” with big law partner data should help shore up these figures, but perhaps there’s not enough of it (indeed, we have no idea how those responses are mixed in with the rest of the data).
Second, it is possible that lawyers and judges (and perhaps big law firm partners in particular) are viewing elite law schools with less respect than at any time in recent history, and perhaps more so in the last year than ever before. It might be law student or university protests over the Gaza conflict, fossil fuels, free speech; pick a cause. And perhaps the brunt of that publicity (and perhaps the actual events) is falling on the most elite schools, creating fallout for their reputations in the legal community more generally. But that is very hard to establish and to pinpoint, and one might want to see what happens next year.
Neither is a perfect causal explanation, but both offer some possibilities to consider. Now, again, I would have expected the new methodology to help entrench elite schools, but this year it seems not to have done so.
We shall see what happens next year. Will three-year averages of some categories be in store? Or will USNWR introduce other categories (e.g., if big law firm partners merit a special survey, shouldn’t big law employment outcomes merit special weight?), consistent with concerns that the methodology ought to value certain things more than it has in the past?
There’s much more to discuss, of course, but this is my first take on the methodological changes in particular and on the noteworthy shift in reputation scores among the legal profession for a cohort of law schools.