Law schools have an extraordinary moment to rethink admissions in light of USNWR methodology changes

The USNWR law rankings undoubtedly affect law school admissions decisions. A decade ago, I chronicled how law schools pursue lower-quality students (as measured by predicted first-year law school GPA) to achieve higher median LSAT and UGPA scores that benefit their USNWR status.

While there is a lot of churn in the world of graduate school admissions at the moment—“test optional” or alternative testing policies, and the Supreme Court’s decision in Students for Fair Admissions v. Harvard, among other things—there’s a tremendous opportunity for law schools in light of the changes in USNWR methodology. Opportunity—but also potential cost.

Let’s revisit how USNWR has changed its methodology. It has dramatically increased the weight given to outputs (employment and bar passage). It has dramatically decreased the weight given to inputs (LSAT and UGPA). The weight given to the peer score also declined significantly.

But it’s not just the weight in those categories. It’s also the distribution of scores within each category.

The Z-scores below are from my estimated rankings for next spring. They reflect a mix of last year’s and this year’s data, so they are not directly comparable to either year’s published rankings. And of course USNWR can alter its methodology to add categories, change the weights of categories, or change how it constructs them.

The image below takes the “weighted Z-scores” at five reference points in each category—the top-performing school, the upper quartile, the median, the lower quartile, and the bottom. (A quartile is just under 50 law schools.) It gives you a sense of the spread for each category.

The y-axis shows the weighted values that contribute to the raw score. You’ll see a lot of compression.
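To make the “weighted Z-score” idea concrete: each category is standardized across all schools and then scaled by that category’s weight, so both the weight and the raw spread determine how far schools separate. Here is a minimal sketch of that computation; the weights and data below are hypothetical placeholders, not USNWR’s actual figures or formula:

```python
import numpy as np

# Hypothetical category weights, for illustration only; USNWR's actual
# weights and its undisclosed employment formula are not reproduced here.
WEIGHTS = {"employment": 0.33, "first_time_bar": 0.18, "median_lsat": 0.05}

def weighted_z(values: np.ndarray, weight: float) -> np.ndarray:
    """Standardize one category across all schools, then scale by its weight."""
    z = (values - values.mean()) / values.std()
    return weight * z

# Toy data: one category's raw values for five schools.
employment_rates = np.array([0.98, 0.92, 0.85, 0.70, 0.40])
wz = weighted_z(employment_rates, WEIGHTS["employment"])

# The chart's five reference points: top, upper quartile, median,
# lower quartile, bottom.
print(np.percentile(wz, [100, 75, 50, 25, 0]))
```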

At the outset, you’ll notice that the “bottom” school in each category can drop quite low. I noted earlier that the decision to add Puerto Rico’s three law schools to the USNWR rankings can distort some of these categories. There are other reasons to exclude the lower-ranking outliers, which I’ll do in a moment—after all, many schools that are in the “rank not published” category are not trying to “climb” the rankings, as they are pursuing a different educational mission.

The categories are sorted from the biggest spread to the smallest. (1) is “employed at 10 months.” (Note that this category turns on a USNWR formula for employment that is not publicly disclosed, so this is a rough estimate that relies heavily on the “full weight” and “zero weight” employment categories, which I’ll turn to in the next image.) (2) is first-time bar passage rate. That is a large spread, but nothing compared to employment. (3) (lawyer-judge score) and (4) (peer score) have more modest spreads quite close to one another. (5) is ultimate bar passage. Then (6) is student-faculty ratio. Only at (7) do we reach LSAT, and at (8) UGPA. Note how compressed these categories are. There is very little spread from top to bottom—or, more appropriately, from top to lower quartile.

Let’s look at the chart another way, this time with some different numbers. I eliminated the “bottom” and left just the top, upper quartile, median, and lower quartile. This shuffles the categories slightly in a few places, as they are now sorted by the top-to-lower-quartile spread. I then added the figures for the schools at each point. These are not always precise—schools do not fall into exact quartiles and there can be ties—so there may be rounding.

The employment category (1) includes two figures—“full weight” jobs (full-time, long-term, bar-passage-required or JD-advantage positions, whether funded by the school or not, plus students in a graduate degree program), and students who are unemployed or whose status is unknown. For the quartiles, I averaged a handful of schools in the range where I estimate the quartile to fall, to give a sense of where a school at that point stands—they are, again, not precise, but pretty good estimates. (More on these employment categories in a future blog post.)

You can see how much can change with very modest shifts in a graduating class’s employment outcomes. By moving about 3 percentage points of a class from “unemployed” to a “full weight” job (in a school of 200, that’s 6 students), a school can move from roughly 100th in that category to roughly 50th.
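A back-of-the-envelope sketch of that arithmetic; only the 200-student class and the 3-point shift come from the example above, and the rest of the class composition is hypothetical:

```python
class_size = 200   # illustrative class size from the example above
full_weight = 150  # hypothetical: 75% of the class in "full weight" jobs
unemployed = 20    # hypothetical: 10% unemployed or status unknown

moved = round(0.03 * class_size)  # a 3-percentage-point shift: 6 students

before = (full_weight / class_size, unemployed / class_size)
after = ((full_weight + moved) / class_size,
         (unemployed - moved) / class_size)
print(moved)          # 6
print(before, after)  # (0.75, 0.1) -> (0.78, 0.07)
```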

Then you can compare, visually, that gap across other categories. Moving from 100th to 50th in employment is larger than the gap between a 153 median LSAT and a 175 median LSAT (category (7)). It’s larger than the gap between an incoming class with a 3.42 median UGPA and one with a 3.95 (category (8)). It’s the equivalent of seeing your peer score rise from 1.8 to 2.9 (category (4)).

These are fairly significant disparities in the effective weight of these categories—and a reason why I noted earlier this year that the new methodology would result in dramatically more volatility. Employment outcomes dwarf just about everything else. Very modest changes—including modest increases in academic attrition—can change a lot quickly.

Visualizing the figures like this also helps show why these weights do not particularly correlate with how one envisions the “quality” of a law school. If you are a prospective student assessing the quality of an institution, the rankings have become much less valuable for that purpose. While this is sometimes comparing apples to oranges, I think an LSAT median difference of 153 to 175 is much more meaningful than a 3-point improvement in employment outcomes. It’s one thing to say employment outcomes are 33% of the rankings. It’s another to see how they relate to other factors. Likewise, if I am a prospective employer trying to assess the quality of a school I may not know much about, the new USNWR methodology is much less helpful. I care much more about the quality of students than about these marginal changes in employment—which, recall, classify everything from a Wachtell associate position to pursuit of a master’s degree in that same law school’s graduate program the same way.

First-time bar passage rate (category (2)) matters a great deal, too. Outperforming state jurisdictions by 21 points puts you at the top of the range; outperforming them by 10 points puts you at the upper quartile, and by 2 points at the median. It is harder, I think, to increase your bar passage rate by 8 points relative to the statewide averages of the states where graduates take the bar. But there’s no question that a “good” or “bad” year for a law school’s graduates can swing this category significantly. And again, look at how wide the distribution of scores is compared to the admissions categories in (7) and (8).
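As I understand the comparison, a school’s first-time pass rate is set against the statewide pass rates in the jurisdictions where its graduates sat for the bar, weighted by the number of takers. A sketch with made-up numbers:

```python
# All figures hypothetical: a school whose graduates sit in two states.
takers = {"NY": 120, "NJ": 30}          # first-time takers by jurisdiction
passed = {"NY": 105, "NJ": 24}          # how many of them passed
state_rate = {"NY": 0.78, "NJ": 0.70}   # statewide first-time pass rates

total = sum(takers.values())
school_rate = sum(passed.values()) / total
benchmark = sum(takers[s] * state_rate[s] for s in takers) / total

# Positive means the school outperformed its weighted state benchmark.
print(round((school_rate - benchmark) * 100, 1))  # 9.6 points above
```

On these numbers the school sits just below the upper quartile; a handful of additional passers (or failers) in New York would move it noticeably.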

You can see ultimate bar passage (5) and its relationship to LSAT (7) and UGPA (8). Recall that I blogged earlier about the ultimate bar passage rate, and how just a few more students passing or failing the bar is the equivalent of dramatic swings in admissions statistics.

The student-faculty ratio (6) is a fairly remarkable category, too. It’s probably not possible for schools to hire significant numbers of faculty to adjust it. But because the ratio is based on total students, schools can try to massage it with one-year admissions changes that shrink the class. (More on admissions and graduating class sizes in a future post. And that sets aside how adjuncts play into this ratio, of course.)
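A quick sketch of why one smaller entering class moves the ratio (all figures hypothetical, and adjuncts are ignored here):

```python
faculty = 50
students = {"1L": 200, "2L": 200, "3L": 200}  # hypothetical enrollment

print(sum(students.values()) / faculty)  # 12.0 students per faculty member

# Hypothetical one-year move: admit 30 fewer students.
students["1L"] = 170
print(sum(students.values()) / faculty)  # 11.4
```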

(The last two categories, library resources and acceptance rate, are largely too compressed to mean much.)

I appreciate your patience through this discourse on the new methodology. But what does this have to do with admissions?

Consider the spread in these scores. It shows, in numerical terms, that outputs (employment and bar passage) matter far more than inputs (LSAT and UGPA). That means law schools that value USNWR rankings need to rethink admissions as less about incoming medians and more about what the incoming class will do after graduation.

Law schools could favor a number of things over the traditional chase of LSAT and UGPA medians. Some law schools already do. But the point of this post is that it now makes sense for schools to do so if they want to climb the USNWR rankings. Admissions centered on LSAT and UGPA medians are short-term winners and long-term losers. A long-term winning strategy looks for the prospective students with the highest likelihood of positive outcomes.

Some possible changes to admissions strategy are likely positive:

  • Law schools could rely more heavily on the LSAC index, which is more predictive of student success than either number alone, even if it means sacrificing a little on the LSAT and UGPA medians. (See the sketch after this list.)

  • Law schools could seek out students in hard sciences, who traditionally have weaker UGPAs than other applicants.

  • Law schools can consider “strengthening” the “bottom” of a prospective class if they know they do not need to “target” a median—they can pursue a class that is not “top heavy” and does not have a significant spread in applicant credentials from “top” to “bottom.”

  • Law schools can lean into need-based financial aid packages. If pursuit of the medians is not as important, a school can afford to lose a little on the medians, spend less on merit-based aid, and instead use some of that money for need-based aid.

  • Law schools could rely more heavily on alternative tests like the GRE, or on pre-law pipeline programs, to ascertain likely success, if those prove more predictive of longer-term employment or bar passage outcomes.
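On the first bullet: the LSAC index is a school-specific linear combination of LSAT and UGPA, with coefficients derived from how well each school’s own students’ credentials predicted their first-year grades. A minimal sketch, with placeholder coefficients rather than any school’s actual formula:

```python
def lsac_index(lsat: int, ugpa: float,
               a: float = 1.0, b: float = 10.0, c: float = 0.0) -> float:
    """School-specific linear combination of LSAT and UGPA.
    The coefficients a, b, c are illustrative placeholders; LSAC derives
    each school's coefficients from its own students' first-year grades."""
    return a * lsat + b * ugpa + c

# Two hypothetical applicants: with these weights, a slightly lower LSAT
# paired with a higher UGPA can index above the reverse profile.
print(lsac_index(170, 3.3))  # 203.0
print(lsac_index(168, 3.8))  # 206.0
```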

There are items that are more of a mixed bag, too—or even negative in some contexts (and I do not suggest that they are always negative, or that schools consistently or even frequently use them that way). Those include:

  • Law schools could interview prospective students, which would allow them to assess “soft factors” relating to employment outcomes—and may open the door to unconscious biases, particularly with respect to socioeconomic status.

  • Law schools could more aggressively consider resume experience and personal statements to determine whether students “fit” the institution, the alumni base, the geography, or other “soft” factors like “motivation.” But, again, unconscious biases come into play, and it’s quite possible that these elements of the application redound to the benefit of those who can afford a consultant, or who have had robust academic advising over the years, to “tailor” their resumes the “right” way.

  • Law schools could look to prospective students with prior work experience as those more likely to secure gainful employment after graduation. But if law schools favor students who already have law firm experience (say, from a family connection), it could perpetuate legacy-based admissions.

All of this is to say, there is an extraordinary moment right now to rethink law school admissions. USNWR may disclaim that its methodology influences law school admissions, but law schools’ revealed preferences demonstrate that they are often driven, at least in part, by USNWR. The change in methodology, however, should change how law schools think about these traditional practices. There are pitfalls to consider, to be sure. And of course, one should not “chase rankings”—among other things, the methodology can shift under schools. But if there are better ways of doing admissions that have been hamstrung (in part) by a median-centric USNWR methodology, this post suggests that now is the right time to pursue them.