Modeling and projecting USNWR law school rankings under new methodologies

I mused earlier about the “endgame” for law schools “boycotting” the rankings. It’s apparent now that USNWR will not abandon the rankings, and it’s quite unclear (and I would guess doubtful) that these rankings, with different metrics, will be any less influential to prospective law students or employers than in the past. But the methodology will change. What might that mean for law schools?

I assume that before law schools made the decision to boycott, they modeled some potential results of the boycott to determine what effect it might have on the rankings. We have greater clarity now than those schools did before the boycott, and we can model somewhat better the potential effects we’ll see in a couple of months. I developed some models to look at (and brace for) the potential upcoming landscape.

I’ve talked with plenty of people at other schools who are privately developing their own models. That’s great for them, but I wanted to offer something public-facing, and to plant a marker to see how right—or, more likely, how wrong!—I am come spring. (And believe me, if I’m wrong, I’ll write about it!)

First, the criteria. USNWR disclosed that it would no longer use privately collected data and would instead rely exclusively on publicly available data, with the exception of its reputational survey data. (You can see what Dean Paul Caron has aggregated on the topic for more.) It’s not clear whether USNWR will rely on public data other than the ABA data. It’s also not clear whether it will introduce new metrics. It has given some indications that it will reduce the weight of the reputational survey data and increase the weight of output metrics.

The next step is to “model” those results. (Model is really just a fancy word for educated guess.) I thought about several different ways of doing it before settling on these five (without knowing what the results of the models would entail).

| | Current weight | Model A | Model B | Model C | Model D | Model E |
|---|---|---|---|---|---|---|
| Peer score | 0.25 | 0.225 | 0.225 | 0.225 | 0.2 | 0.15 |
| Lawyer/judge score | 0.15 | 0.125 | 0.125 | 0.125 | 0.1 | 0.1 |
| UGPA | 0.0875 | 0.09 | 0.09 | 0.09 | 0.1 | 0.1 |
| LSAT | 0.1125 | 0.12 | 0.13 | 0.16 | 0.17 | 0.15 |
| Acceptance rate | 0.01 | 0.02 | 0.03 | 0.02 | 0.03 | 0.03 |
| First-time bar passage | 0.03 | 0.05 | 0.1 | 0.07 | 0.05 | 0.12 |
| 10-month employment rate | 0.14 | 0.3 | 0.25 | 0.23 | 0.25 | 0.3 |
| Student/faculty ratio | 0.02 | 0.04 | 0.04 | 0.03 | 0.05 | 0.05 |
| Librarian ratio* | 0.01 | 0.01 | 0.01 | 0 | 0 | 0 |
| Ultimate bar passage | 0 | 0.02 | 0 | 0.05 | 0.05 | 0 |
| Other factors | 0.19 | 0 | 0 | 0 | 0 | 0 |
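
Here is a minimal sketch of how one might encode these weight sets in code (Model A and Model E shown; the dictionary names and structure are mine, not USNWR’s), along with a check that each model’s weights sum to 1:

```python
# Candidate weight sets drawn from the table above. These are my guesses about
# a possible new methodology, not anything USNWR has announced.
MODEL_WEIGHTS = {
    "A": {"peer": 0.225, "lawyer_judge": 0.125, "ugpa": 0.09, "lsat": 0.12,
          "acceptance_rate": 0.02, "first_time_bar": 0.05, "employment_10mo": 0.30,
          "student_faculty": 0.04, "librarian_ratio": 0.01, "ultimate_bar": 0.02},
    "E": {"peer": 0.15, "lawyer_judge": 0.10, "ugpa": 0.10, "lsat": 0.15,
          "acceptance_rate": 0.03, "first_time_bar": 0.12, "employment_10mo": 0.30,
          "student_faculty": 0.05, "librarian_ratio": 0.00, "ultimate_bar": 0.00},
    # Models B, C, and D follow the same pattern.
}

# Every model's weights must sum to 1.
for name, weights in MODEL_WEIGHTS.items():
    assert abs(sum(weights.values()) - 1.0) < 1e-9, f"Model {name} does not sum to 1"
```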

Note that at least 19% of the old methodology is being cut from the new methodology. Note, too, that there is diminished weight for, at the very least, the peer score and the lawyer/judge score. That weight has to be made up somewhere else. There’s only so much pie, and a lot of pieces simply have to get bigger. Despite the purported focus on outputs, I think some increased weight on inputs will be inevitable (absent significant new criteria being added).

I added a potential category of “ultimate bar passage” rate in three of the five models. It’s a metric USNWR may adopt: it is based on publicly available information, and it is an output measure, something USNWR has said it intends to rely upon more heavily.

I also added a “librarian ratio” in two of the five models. But it’s a different measure from the library metric in the existing methodology. USNWR has indicated it will not use its internal library resources question (which was a kind of proprietary calculation of library resources), but it has not indicated that it would not use a student-to-librarian ratio, akin to the student/faculty ratio, counting full-time and part-time librarians. So I created that factor in two of the five models.

If I had to guess, I would say relatively minimal adjustments are most likely, as reflected in Models A and B, but I think there is certainly the possibility of more significant changes, as highlighted in Models C, D, and E.

Crucially, the 10-month employment metric is significantly increased in all models. That assumption may be wrong, but it also adds some “responsiveness” (read: volatility) to the rankings, as highlighted below. (I also had to reverse-engineer the weights within the employment metric, which may be in error, and which USNWR could change beyond the changes it has presently indicated.) This is one of the most uncertain categories (and the one where I am most likely to have erred), particularly given how much weight it receives in almost any new model. It is also likely to be the most significant lever law schools have to move their rankings year to year. If you are wondering how or why a school moved significantly, it is likely attributable to this factor. Getting every graduate into some kind of employment is crucial for success.

I used last year’s peer and lawyer/judge scores, given how similar they tend to be from year to year, but with one wrinkle. On the peer scores, I reduced any publicly “boycotting” school’s peer score by 0.1. I assume that the refusal to submit peer reputational surveys from the home institution (or, perhaps, USNWR’s refusal to count those surveys) puts the school at a mild disadvantage on this metric. I do not know that it means 0.1 less for every such school (and there are other variables every year, of course); I just made it an assumption for the models (which, of course, may well be wrong!). Last year, 69% of survey recipients responded. Among roughly 500 respondents, the loss of around 1% of them, even if their responses were quite favorable to their own institutions, would typically not alter the survey average much. But as more respondents remove themselves (at least 14% have publicly suggested they will, with others perhaps privately doing so), each remaining respondent’s importance increases. It’s not clear how USNWR will handle the reduced response rate. This adds just enough volatility, in my judgment, to justify the small downgrade.
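
Here is a rough, back-of-the-envelope illustration of that response-rate point, using hypothetical numbers (the 1–5 score scale, the 3.0 baseline average, and the assumption that the withdrawn responses were maximally favorable are all mine):

```python
def average_shift(n_respondents: int, base_avg: float,
                  n_withdrawn: int, withdrawn_score: float) -> float:
    """Change in a school's average survey score when some respondents withdraw."""
    remaining = n_respondents - n_withdrawn
    new_avg = (base_avg * n_respondents - withdrawn_score * n_withdrawn) / remaining
    return new_avg - base_avg

# Losing ~1% of ~500 respondents, even ones who rated the school very favorably,
# barely moves the average...
print(round(average_shift(500, 3.0, 5, 5.0), 2))   # -0.02
# ...but as more respondents remove themselves, each remaining response matters more
# (an extreme illustration: 14% withdraw, all of whom rated the school a 5).
print(round(average_shift(500, 3.0, 70, 5.0), 2))  # -0.33
```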

Next, I ran all the data from the schools, scaled the metrics, weighted them, and ranked the schools. These are five different models, and they led to five different sets of rankings (unsurprisingly). I then chose the median ranking of each school among the five models. (So the median for any one school could come from any one of the models.)
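
For the curious, here is a minimal sketch of that scale-weight-rank-median pipeline. It assumes min-max scaling across schools (USNWR does not disclose its scaling method, so this is just one plausible choice), and all function and field names are my own:

```python
import statistics

def rank_schools(metrics: dict[str, dict[str, float]],
                 weights: dict[str, float],
                 higher_is_better: dict[str, bool]) -> dict[str, int]:
    """Min-max scale each metric across schools, apply the model's weights,
    sum to a composite score, and rank (1 = best)."""
    schools = list(metrics)
    scores = {s: 0.0 for s in schools}
    for metric, weight in weights.items():
        vals = [metrics[s][metric] for s in schools]
        lo, hi = min(vals), max(vals)
        for s in schools:
            scaled = 0.0 if hi == lo else (metrics[s][metric] - lo) / (hi - lo)
            if not higher_is_better.get(metric, True):
                scaled = 1.0 - scaled  # e.g., acceptance rate: lower is better
            scores[s] += weight * scaled
    ordered = sorted(schools, key=lambda s: scores[s], reverse=True)
    return {s: i + 1 for i, s in enumerate(ordered)}

def median_rank(per_model_ranks: dict[str, dict[str, int]]) -> dict[str, float]:
    """Given each model's ranking ({model: {school: rank}}), return each
    school's median rank across the models."""
    schools = next(iter(per_model_ranks.values()))
    return {s: statistics.median(r[s] for r in per_model_ranks.values()) for s in schools}
```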

Let me add one note. Across these five models, there was very little variance, despite some significant differences in the weighting. Why is that? Many of these items are highly correlated with one another, so adjusting the weights actually changes relatively little. The lower you go in the rankings, however, the more compressed the scores are, and the more volatile even small changes can be.

You’ll also note little change for most schools, perhaps no more than a typical year’s ups and downs. Unless the factors removed or given reduced weight worked as a group to favor or disfavor a school, we aren’t likely to see much change.

One last step was to offer a potential high-low range for the rankings. For each of the five models, I gave each school a rank one step up and one step down, to reflect some degree of uncertainty in how USNWR calculates, for instance, the 10-month employment positions or diploma-privilege admission for bar passage, among other things. That gave me 15 potential rankings—a low, a projected, and a high for each of the five models. I took the lowest of the lows and the highest of the highs as the projected range. Given the high degree of uncertainty, this range is an important caveat.
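
A sketch of that range calculation, under the same assumed data layout as above (the ±1 step and the min/max aggregation are simply my reading of the procedure described here):

```python
def projected_range(per_model_ranks: dict[str, dict[str, int]]) -> dict[str, tuple[int, int]]:
    """For each school, take its rank, one step better, and one step worse in each
    of the five models (15 numbers total), then report the best and worst."""
    schools = next(iter(per_model_ranks.values()))
    ranges = {}
    for s in schools:
        candidates = []
        for ranks in per_model_ranks.values():
            r = ranks[s]
            candidates.extend([max(1, r - 1), r, r + 1])  # low, projected, high
        ranges[s] = (min(candidates), max(candidates))
    return ranges
```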

Below are my projections. (Again, it’s apparent a lot of schools are doing this privately, so I’ll just do one publicly for all to see, share, and, of course, critique. I accidentally switched a couple of schools in a recent preview of this ranking, so I’ve tried to double-check as often as I can to ensure the columns are accurate!) Schools should not be overly worried or joyful about these rankings (or any rankings), but they should manage expectations with appropriate stakeholders about how they might handle any changes in the months ahead. Because these are projected “median” rankings, they will not cleanly line up in a true rank order relative to one another (e.g., two schools are projected at 10, and one school is projected at 11).


| School | Median projected rank | Projected range | Current rank |
|---|---|---|---|
| Yale | 1 | 1–3 | 1 |
| Stanford | 1 | 1–3 | 2 |
| Chicago | 3 | 1–4 | 3 |
| Harvard | 4 | 3–6 | 4 |
| Penn | 5 | 3–8 | 6 |
| Columbia | 6 | 4–8 | 4 |
| NYU | 6 | 4–8 | 7 |
| Virginia | 8 | 5–9 | 8 |
| Berkeley | 9 | 8–12 | 9 |
| Duke | 10 | 8–13 | 11 |
| Northwestern | 10 | 8–13 | 13 |
| Michigan | 11 | 9–14 | 10 |
| Cornell | 13 | 10–14 | 12 |
| UCLA | 14 | 12–15 | 15 |
| Georgetown | 15 | 14–17 | 14 |
| Vanderbilt | 16 | 14–18 | 17 |
| Washington Univ. | 17 | 15–19 | 16 |
| Texas | 18 | 16–20 | 17 |
| USC | 19 | 15–20 | 20 |
| Minnesota | 20 | 19–23 | 21 |
| Florida | 21 | 19–23 | 21 |
| Boston Univ. | 22 | 20–25 | 17 |
| Georgia | 22 | 19–25 | 29 |
| North Carolina | 24 | 21–27 | 23 |
| Notre Dame | 24 | 21–27 | 25 |
| BYU | 26 | 23–31 | 23 |
| Emory | 26 | 22–33 | 30 |
| Ohio State | 27 | 24–30 | 30 |
| George Washington | 28 | 25–34 | 25 |
| Wake Forest | 29 | 24–33 | 37 |
| Arizona State | 30 | 26–33 | 30 |
| Boston College | 31 | 26–34 | 37 |
| Fordham | 33 | 30–37 | 37 |
| Irvine | 33 | 30–38 | 37 |
| Alabama | 34 | 30–37 | 25 |
| Iowa | 36 | 33–42 | 28 |
| George Mason | 36 | 30–42 | 30 |
| Texas A&M | 37 | 30–42 | 46 |
| Illinois | 38 | 33–42 | 35 |
| Washington & Lee | 39 | 34–43 | 35 |
| Utah | 39 | 34–44 | 37 |
| Wisconsin | 42 | 37–51 | 43 |
| William & Mary | 43 | 39–47 | 30 |
| Pepperdine | 43 | 41–49 | 52 |
| Villanova | 43 | 38–47 | 56 |
| Indiana-Bloomington | 46 | 41–51 | 43 |
| Florida State | 46 | 42–53 | 47 |
| Davis | 48 | 44–58 | 37 |
| Arizona | 48 | 46–55 | 45 |
| Maryland | 48 | 44–53 | 47 |
| Washington | 48 | 44–53 | 49 |
| SMU | 48 | 43–53 | 58 |
| Baylor | 53 | 46–56 | 58 |
| Kansas | 53 | 46–58 | 67 |
| Colorado | 55 | 47–61 | 49 |
| Cardozo | 55 | 48–61 | 52 |
| Temple | 56 | 53–58 | 63 |
| UCSF (Hastings) | 58 | 56–72 | 51 |
| Richmond | 58 | 55–63 | 52 |
| Wayne State | 58 | 51–64 | 58 |
| Tulane | 60 | 56–69 | 55 |
| Tennessee | 61 | 54–66 | 56 |
| Oklahoma | 61 | 58–67 | 88 |
| Loyola-Los Angeles | 62 | 58–69 | 67 |
| Houston | 64 | 60–69 | 58 |
| Miami | 66 | 61–69 | 73 |
| South Carolina | 66 | 61–72 | 84 |
| San Diego | 68 | 64–74 | 64 |
| Northeastern | 68 | 61–74 | 73 |
| Seton Hall | 68 | 61–77 | 73 |
| Connecticut | 69 | 66–78 | 64 |
| Florida International | 69 | 61–80 | 98 |
| Missouri | 73 | 67–78 | 67 |
| Drexel | 73 | 62–83 | 78 |
| Georgia State | 73 | 66–80 | 78 |
| St. John's | 73 | 68–83 | 84 |
| Oregon | 76 | 68–83 | 67 |
| Case Western | 76 | 68–85 | 78 |
| Penn State Law | 77 | 72–85 | 64 |
| Kentucky | 77 | 68–85 | 67 |
| American | 77 | 72–87 | 73 |
| Denver | 80 | 72–85 | 78 |
| Marquette | 82 | 72–85 | 105 |
| Texas Tech | 84 | 74–92 | 105 |
| Cincinnati | 85 | 81–92 | 88 |
| Lewis & Clark | 85 | 81–92 | 88 |
| Penn State-Dickinson | 86 | 83–92 | 58 |
| UNLV | 86 | 83–93 | 67 |
| Loyola-Chicago | 86 | 83–92 | 73 |
| Pitt | 86 | 83–92 | 78 |
| Stetson | 86 | 78–93 | 111 |
| Chicago-Kent | 92 | 85–93 | 94 |
| Nebraska | 93 | 91–99 | 78 |
| Rutgers | 94 | 91–102 | 86 |
| Drake | 94 | 91–99 | 111 |
| St. Louis | 95 | 93–99 | 98 |
| St. Thomas (Minnesota) | 96 | 92–105 | 127 |
| West Virginia | 97 | 93–108 | 118 |
| Michigan State | 98 | 94–106 | 91 |
| Louisville | 99 | 94–104 | 94 |

While I’ve identified “winners” and “losers” in previous posts on individual metrics, not all of those results carry over to the final rankings, which reflect far more than just the isolated categories I looked at earlier. But some obvious winners emerge: Georgia, Texas A&M, and Villanova each see significant improvement, regardless of which version of the methodology is used.

You may well ask, “Why is X over Y?” or “How can A be ranked at B?” The answer is, I gave you the model weights and my assumptions, and this is the result they produce. It’s all publicly available data, and, again, many schools are privately doing this already and know their areas of strength and weakness relative to other law schools.

At the end of the day, we’ll see how wrong I am.

But I think it’s also at least some sign that the shakeup, for most schools, may not be nearly as dramatic as one might suppose.

UPDATE 1/18/2023: I had originally thought I made an error in the calculations I used for the bar passage rates, but some schools’ data created inconsistencies that I had to go back and check. I was right the first time!