Trying (unsuccessfully) to account for law school expenditures under the USNWR rankings formula

The most opaque elements of the USNWR law school rankings relate to expenditures. They are 10% of the rankings. USNWR does not disclose them, and the ABA does not disclose comparable figures. You can access them if you purchase access to the “Academic Insights” platform, but they are not otherwise publicly available (or available for any public distribution).

Nevertheless, they are probably the most significant factor in the rankings. The gap between the top end and the bottom end is large, which gives schools the most opportunity to outperform others and climb in the weighted, scaled scores. Expenditures are also the least sticky factor. Peer reputation, by contrast, is notoriously sticky: while it is a whopping 25% of the ranking, it contributes very little volatility.

(This is not a new concern. Professor Brian Leiter and Professor Corey Rayburn Yung are just two recent examples of law faculty pointing to these problems.)

USNWR collects information from law schools on “expenditures.” (More on this undefined word in a moment.) It asks for “Instructional salaries,” “Administrative and student services salaries,” “Library salaries,” “Other salaries not included elsewhere,” “Fringe benefits,” “Law school expenses (exclude library),” “Financial aid,” and “Library operations.” There is a separate line for “indirect expenditures and overhead.” Law schools are instructed to “exclude expenditures for the LLM program.” That’s about all the instruction law schools are given.

The methodology reports that they calculate per-student metrics: “The average spending on instruction, library and supporting services (0.09) and the average spending on all other items, including financial aid (0.01): The faculty resources calculation for instruction, library and supporting services is adjusted for cost of living variations in law school salaries between school geographic locations by using publicly available Bureau of Economic Analysis Regional Price Parities index data.”
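To make the arithmetic concrete, here is a minimal sketch of what a per-student, cost-of-living-adjusted spending figure might look like, assuming a simple division by the BEA index. USNWR does not disclose its exact formula; the function, field names, and figures below are hypothetical.

```python
# Hypothetical sketch of a per-student, cost-of-living-adjusted spending figure.
# USNWR does not disclose its exact formula; the names and the adjustment step
# here are assumptions for illustration only.

def adjusted_spending_per_student(direct_spending: float, jd_students: int,
                                  regional_price_parity: float) -> float:
    """Direct (9%-category) spending per JD student, deflated by a BEA Regional
    Price Parities index value (100 = national average)."""
    per_student = direct_spending / jd_students
    return per_student / (regional_price_parity / 100)

# Example: $45M in instruction/library/support spending, 600 JD students,
# in a metro area with an RPP of 110 (all figures invented).
print(f"${adjusted_spending_per_student(45_000_000, 600, 110):,.0f} per student")
```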

Some categories seem fairly straightforward. “Instructional salaries,” for instance.

But “Law school expenses,” an open-ended catch-all, remains a decidedly underdefined category, and a crucial (often large) component of the portion of the rankings worth 9%. Other “overhead” and adjacent expenses could fall into the “indirect” expenditures, which are weighted at just 1% of the rankings. The scandal exposed in 2005 revealed that everything from water bills to the “market value” of LexisNexis accounts has been stuffed into these metrics. (And there’s a whole separate and complicated issue about how law schools “count” their physical buildings in these “expenditure” metrics.)

One might think that the word “expenditures” means dollars out the door, but it’s been obvious for years that accounting methods (including depreciation or other cost-basis allocation) have been a part of how schools calculate “expenditures.”

But many law schools report extraordinary expenditures. How extraordinary?

Imagine we say that a law school could calculate X dollars in revenue from “gross tuition.” That is, assuming no scholarships and no “discount rate” (which is a totally implausible assumption, I note), let’s figure out how much money a law school might collect from total tuition from its students.

Many schools report “expenditures” of 2X that number. And some even higher.

How is that even possible? Let’s try another small experiment. Suppose we calculate this as a real dollar total. If a school has, say, 600 students and $50,000/year tuition, we can say that “gross tuition” revenue is $30 million. (Again, an implausible assumption, but work with me.) A school reporting expenditures of $60 million, or $75 million, would be burning an extraordinary amount of additional, outside capital each year if these were truly “expenses.”

Law schools are usually cagey about revealing their endowments. We have some figures at the high end: Harvard Law in 2008 reported an endowment of $1.7 billion, which at 5% would spin off $85 million a year. (Of course, Harvard has around 1700 JD students, not 600.) Yale Law in 2009 reported an endowment of $1.2 billion, which would spin off around $60 million.

But most endowments are much more modest. Emory Law in 2016 reported an endowment of $43 million that would spin off $1.7 million a year—and most of that went toward scholarships (i.e., the 1% category, not the 9% category). The University of Texas Law Foundation presently reports an endowment of more than $150 million, which would spin off around $7.5 million. (At any of these institutions, parts of the endowment may be held by the parent university for use at the law school, so the figures could be much larger.)
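For what it’s worth, the back-of-envelope arithmetic in the last few paragraphs fits in a few lines; the figures are the illustrative ones from the text, not actual budgets.

```python
# Back-of-envelope figures from the text above (illustrative, not actual budgets).
students, tuition = 600, 50_000
gross_tuition = students * tuition  # assumes no scholarships or discount rate
print(f"Gross tuition: ${gross_tuition:,}")  # $30,000,000

# Endowment payouts at a 5% spending rate.
for school, endowment in [("Harvard Law (2008)", 1_700_000_000),
                          ("Yale Law (2009)", 1_200_000_000)]:
    print(f"{school}: ${endowment * 0.05:,.0f} per year")

# A school reporting $60M-$75M in "expenditures" against $30M in gross tuition
# would need $30M-$45M a year from other sources if those were truly cash expenses.
```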

It’s possible, of course, that many law schools earn a tremendous amount in one-time or annual gifts, among other outlets. And it’s also possible that parent universities are much more heavily subsidizing their law schools than they’ve otherwise been letting on. Finally, it’s also possible that law schools with rising non-JD programs are funneling the profits from those programs into the JD program.

These possibilities seem limited. I don’t know of many schools with small endowments that are consistently pulling in extremely large gifts each year. Parent universities can help subsidize in some cases and in limited circumstances, but even these subsidies rarely go beyond a few million dollars a year in the most extreme cases. And it’s not clear how expansive these non-JD programs are—certainly not so expansive at most universities as to subsidize a JD program so heavily.

And, as I mentioned, this is an implausible baseline. Most schools have a fairly significant “discount rate” and offer a significant number of scholarships to students, which offset tuition revenue (and count toward the “indirect” expenditures). So schools not only need revenue to cover these “indirect” expenditures, but they’re also purportedly spending far more than whatever the X, 2X, etc. “gross tuition” baseline is.

In short, then, almost however you look at it, it’s impossible to account for law school expenditures under the USNWR rankings, except to conclude that accounting assumptions are being made that bear no relation to the reality of the quality of the education students receive.

This grows more obvious and pronounced each year. But, like many critiques of the rankings, it seems unlikely to go anywhere anytime soon.

USNWR has erratically chosen whether "statewide bar passage" rate includes only ABA-approved law schools over the years

I was directed to the fact that the new USNWR bar exam metric includes “the weighted state average among ABA accredited schools' first-time test takers in the corresponding jurisdictions in 2020.” “ABA accredited” was added. Didn’t the first-time bar exam passage rate only include ABA accredited schools in the past?

The previous methodology looked at the modal state where a law school’s graduates took the bar exam, and the “jurisdiction's overall state bar passage rate for first-time test-takers in winter and summer” of that year.

I looked at the 2022 rankings (released in 2021, using the 2019 bar exam data). I picked California, known for its significant cohort of non-ABA test-takers. The overall first-time pass rate was 59%, but the first-time pass rate among ABA accredited schools was 69%. (Historical stats are here.) USNWR used the 59% rate.

That surprised me at first. I had assumed USNWR only used ABA accredited data. It also made me think that California schools would be harmed the most by this shift in metrics (even if I think it’s more accurate). That’s because California schools are less likely to “overperform” when the benchmark pass rate is higher (i.e., when it uses only ABA accredited test-takers instead of all test-takers).

But then I dug further.

The 2021 rankings (released in 2020, using 2018 bar exam data) reported California’s first-time bar pass rate as 60%. The ABA first-time rate was 60%. But the overall rate was 52%. So in this year, USNWR used only ABA accredited schools.

The 2020 rankings (released in 2019, using 2017 bar exam data) reported a first-time pass rate of 58%. That’s the same as the overall first-time pass rate of 58%, not the 66% from ABA accredited law schools. So in this year, USNWR used overall first-time pass rates. And it appears USNWR did the same in 2019 (released in 2018, using 2016 bar exam data).

In short, there does not appear to be any reason why USNWR has used one method or another over the years. Certainly, this year it is expressly using only ABA data, and maybe it intends to stick with that going forward. But it’s another, subtle change that could adversely affect those schools (e.g., California) with a significant cohort of non-ABA test-takers. It’s probably the right call. But it also highlights the inconsistency of USNWR in its methodology over the years.

USNWR law school voters sank Yale Law and Harvard Law for the first time in rankings history.

The USNWR “peer score” is the single most heavily-weighted component of the law school rankings. USNWR surveys about 800 law faculty (the law dean, the associate dean for academics, the chair of faculty appointments and the most recently-tenured faculty member at each law school). Respondents are asked to evaluate schools on a scale from marginal (1) to outstanding (5). There’s usually a pretty high response rate—this year, it was 69%.

Until this year, Yale & Harvard had always been either a 4.8 or 4.9 on a 5-point scale in every survey since 1998.

But this year, Harvard's peer score was a 4.7. And Yale's was a 4.6.

What precipitated the drop (Harvard’s, for example, could be close to a rounding error—it may have been a 4.76 in the past and a 4.74 now) is anyone’s guess. But respondents do tend to react to certain influences, it would seem, and one can only speculate about what might have prompted such responses in the fall of 2021 or early 2022, when this cohort was surveyed.

Some dramatic swings as USNWR introduces new bar exam metric

The latest USNWR law school ranking has some significant swings in the bar exam component. USNWR made three significant changes: increasing the weight from 2.25% to 3%, measuring “all graduates who took the bar for the first time,” and including graduates who were admitted via diploma privilege in both a school’s passers and the overall passers. From the methodology:

Specifically, the bar passage rate indicator scored schools on their 2020 first-time test takers' weighted bar passage rates among all jurisdictions (states), then added or subtracted the percentage point difference between those rates and the weighted state average among ABA accredited schools' first-time test takers in the corresponding jurisdictions in 2020. This meant schools that performed best on this ranking factor graduated students whose bar passage rates were both higher than most schools overall, and higher compared with what was typical among graduates who took the bar in corresponding jurisdictions.

For example, if a law school graduated 100 students who first took the bar exam – and 88 took the Florida exam, 10 the Georgia exam and two the South Carolina exam – the school's weighted average rate would use pass rate results that were weighted 88% Florida, 10% Georgia and 2% South Carolina. This computation would then be compared with an index of these jurisdictions' average pass rates – also weighted 88-10-2. (For privacy, school profiles on usnews.com only display bar passage data for jurisdictions with at least five test-takers.) Both weighted averages included any graduates who passed the bar with diploma privilege. Diploma privilege is a method for J.D. graduates to be admitted to a state bar and allowed to practice law in that state without taking that state's actual bar examination. Diploma privilege is generally based on attending and graduating from a law school in that state with the diploma privilege.

In previous editions, U.S. News divided each school's first-time bar passage rate in its single jurisdiction with the most test-takers by the average for that lone jurisdiction. This approach effectively excluded many law schools' graduates who took the bar. Dividing by the state average also meant the location of a law school impacted its quotient as much as its graduates' bar passage rate itself. The new arithmetic accounts for average passage rates across all applicable jurisdictions as proxy for each exam's difficulty and reflects that passing the bar is a critical outcome measure in itself.
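Here is a minimal sketch of the calculation as I read the methodology quoted above, using USNWR’s hypothetical 88/10/2 school. The pass rates are invented for illustration, and the add-the-difference step is my reading of the description, not a disclosed formula.

```python
# Sketch of the bar indicator as described in the quoted methodology, using the
# hypothetical 88/10/2 school. Pass rates are invented for illustration.
school_takers  = {"FL": 88, "GA": 10, "SC": 2}           # first-time takers by jurisdiction
school_passers = {"FL": 70, "GA": 8,  "SC": 2}            # per USNWR, includes diploma-privilege admits
state_avg_rate = {"FL": 0.72, "GA": 0.78, "SC": 0.70}     # ABA-school state averages (invented)

total = sum(school_takers.values())
school_rate = sum(school_passers.values()) / total
# Benchmark weighted by where the school's graduates actually took the bar (88-10-2).
benchmark = sum(state_avg_rate[j] * n for j, n in school_takers.items()) / total

# "Added or subtracted the percentage point difference" between the two rates.
indicator = school_rate + (school_rate - benchmark)
print(f"School rate {school_rate:.1%}, benchmark {benchmark:.1%}, indicator {indicator:.1%}")
```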

The new methodology really changes the results for two kinds of schools. (The increase in the weight from 2.25% to 3% obviously also benefits schools that do well and harms schools that do poorly that much more.)

First, it benefits good schools in jurisdictions with tougher bars and strong out-of-state placement.

Second, it harms Wisconsin’s two law schools.

Let’s start with the first. Which schools benefited most from 2022 (measuring the 2019 bar) to 2023 (measuring the 2020 bar)? (These charts exclude a handful of schools that did not include their bar passage statistics this time around.)

| School | Pass rate 2019 | Jurisdiction | Jurisdiction rate 2019 | Cumulative pass rate 2020 | Cumulative jurisdiction rate 2020 | USNWR score delta |
| --- | --- | --- | --- | --- | --- | --- |
| San Francisco | 38.7% | CA | 59% | 78.4% | 78% | 0.0497 |
| William & Mary | 86.7% | VA | 78% | 96.9% | 81% | 0.0323 |
| Washington & Lee | 80.0% | VA | 78% | 92.6% | 81% | 0.0317 |
| Emory | 84.5% | GA | 77% | 91.7% | 78% | 0.0295 |
| Minnesota | 94.0% | MN | 81% | 98.9% | 82% | 0.0279 |
| Georgia | 94.5% | GA | 77% | 94.4% | 76% | 0.0272 |
| Kentucky | 78.4% | KY | 75% | 90.6% | 80% | 0.0266 |
| Montana | 88.9% | MT | 85% | 92.5% | 82% | 0.0255 |
| Penn State-Dickinson | 88.5% | PA | 80% | 91.7% | 79% | 0.0249 |
| Drexel | 77.1% | PA | 80% | 84.0% | 78% | 0.0249 |

On the left are the school’s 2019 pass rate in its modal jurisdiction and that jurisdiction’s pass rate. Next are the cumulative pass rate in 2020 and the cumulative jurisdiction rate. Finally is the delta of the USNWR score—how much better the school did this year compared to last year in the weighted Z-score.

(I noted last year that we saw major swings at some schools in 2020. We see how those are playing out here.)

The University of San Francisco saw a tremendous improvement in California of almost 40 points (aided in part by a lower cut score in California in 2020). But the next three schools are telling. William & Mary and Washington & Lee are strong schools in a very tough bar exam market (Virginia is one of the toughest bars in the country), and Emory sits in Georgia, an above-average-difficulty bar. Each did reasonably well in 2019. But when adding in performances in other jurisdictions, their scores climbed. ABA data shows W&M went 15-for-15 in DC, 15-for-15 in Maryland, and 10-for-10 in New York. All were excluded under the old metric; all are easier bars than Virginia. W&L grads went 13-for-14 in DC, 13-for-13 in North Carolina, and 9-for-10 in New York. Emory went 21-for-21 in New York and 11-for-11 in Florida.

In other words, a diffuse and successful bar exam test-taking base redounds to the benefit of these schools.

Let me add one more detail. The new methodology puts law schools closer to parity with one another when comparing bar passage rates, especially those outside the “outliers.” The more graduates a school has taking the bar across jurisdictions, the less the difficulty of any one bar matters in the end; and the inclusion of “diploma privilege” (or adjacent) admissions lifts the results. The 2019 “denominator” of the bar exam ranged from 55% at the low end among law schools (i.e., Maine) to 87% at the top end (i.e., Kansas), a gap of 32 points. That shrank a bit in 2020 with the new methodology, to a range of 70% to 99% (29 points). But the difference between the 10th and 90th percentiles shrank significantly, from 2019 (61% and 81%, 20 points) to 2020 (75% and 86%, 11 points). In other words, the difference between roughly the 19th and 168th law schools in their “jurisdiction pass rate” was about half as large this year as last year.
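As a minimal illustration of that compression point, one could compute the full range and the 10th-90th percentile gap of schools’ jurisdiction benchmarks under each approach; the arrays below are placeholders, not the actual 2019 or 2020 distributions.

```python
import numpy as np

# Illustration of the compression point: compare the spread of schools' jurisdiction
# benchmarks under the old (modal-state) and new (weighted) approaches.
# These arrays are placeholders, not the actual 2019 or 2020 distributions.
old_benchmarks = np.array([0.55, 0.61, 0.71, 0.74, 0.77, 0.81, 0.87])
new_benchmarks = np.array([0.70, 0.75, 0.79, 0.82, 0.84, 0.86, 0.99])

for label, x in (("old", old_benchmarks), ("new", new_benchmarks)):
    p10, p90 = np.percentile(x, [10, 90])
    print(f"{label}: full range {x.max() - x.min():.0%}, 10th-90th gap {p90 - p10:.0%}")
```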

Let’s look at the worst-performing schools.

| School | Pass rate 2019 | Jurisdiction | Jurisdiction rate 2019 | Cumulative pass rate 2020 | Cumulative jurisdiction rate 2020 | USNWR score delta |
| --- | --- | --- | --- | --- | --- | --- |
| Western State | 56.7% | CA | 59% | 51.7% | 78% | -0.0688 |
| Ohio Northern | 95.7% | OH | 79% | 66.7% | 81% | -0.0658 |
| Golden Gate | 43.9% | CA | 59% | 44.1% | 78% | -0.0620 |
| Faulkner | 81.8% | AL | 77% | 60.7% | 79% | -0.0584 |
| Marquette | 100.0% | WI | 71% | 98.2% | 99% | -0.0538 |
| Southern Illinois | 59.4% | IL | 79% | 50.6% | 82% | -0.0513 |
| Wisconsin | 100.0% | WI | 71% | 100.0% | 99% | -0.0497 |
| CUNY | 74.5% | NY | 74% | 66.7% | 86% | -0.0493 |
| Pepperdine | 81.0% | CA | 59% | 78.6% | 78% | -0.0455 |
| Pace | 76.0% | NY | 74% | 69.6% | 85% | -0.0422 |

You can see that several schools performed worse, or relatively worse, compared to their 2019 figures (again, consistent with what I noted earlier about major swings at some schools in 2020). But note the outliers. Marquette (98.2%) and Wisconsin (100%) both have extraordinarily high bar passage rates, due principally to in-state diploma privilege.

In the past, this redounded to their benefit, as ordinary test-takers who took the bar exam passed at rates substantially lower than 100% (see the 71% figure in 2019), giving these schools a huge advantage. The new USNWR methodology, however, includes all of those diploma privilege admittees as “passers” in the cumulative jurisdiction pass rate, too. Wisconsin and Marquette used to perform about 30 points above the average; they’re now basically at the average.
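To see the swing in rough numbers, here is a sketch using Wisconsin’s figures from the tables above, simplifying the old method to the modal-state quotient and the new method to a percentage-point gap.

```python
# Wisconsin's figures from the tables above. In 2019, USNWR divided the school's
# modal-state pass rate by that state's average (which excluded diploma-privilege
# admits); in 2020, it compares against a benchmark that counts those admits as passers.
school_rate = 1.00
old_benchmark = 0.71   # 2019 WI average, ordinary test-takers only
new_benchmark = 0.99   # 2020 WI benchmark, diploma-privilege admits included

print(f"Old quotient: {school_rate / old_benchmark:.2f}x the state average")  # ~1.41x
print(f"New gap: {school_rate - new_benchmark:+.0%} over the benchmark")       # about +1 point
```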

In one sense, there’s a greater honesty to the metric in comparing similarly-situated graduates to one another. But it comes at the cost of punishing two schools whose graduates are all (or nearly all) immediately able to practice law. That’s a tremendously valuable outcome for law students.

It might be beneficial for USNWR to instead include two factors, absolute passers and relative passers (like this one). Some (especially California deans!) critique an “absolute” pass rate that does not account for the difficulty of the bar. But if we care about law students’ ability to practice law, it seems to me that it’s important to capture whether a school is successfully getting its graduates there, regardless of how hard or easy the bar exam is. (Of course, relative performance also should matter, I think, at least to some degree, as it suggests that some schools are improving opportunities for their graduates.) I confess, others would disagree.

How did other schools, like those in Utah, Washington, or Oregon, not perform much better or worse despite emergency “diploma privilege” being introduced? Recall it’s a mixed bag, depending on the school and the state, history and out-of-state test-takers. The February 2020 exam did not have such exemptions, and those results are partially included in the figures above. Utah and Oregon still had a decent set of in-state test-takers, as diploma privilege did not extend to everyone—but schools in those states didn’t see dramatic changes in overall pass rates, because in both states diploma privilege was keyed to pre-set levels of test-taker success (86%, with an exception in Oregon for in-state schools), which meant most people taking the test would have passed anyway. Washington, in contrast, opened up diploma privilege to essentially all test-takers, and the corresponding increase in passers put the University of Washington near the bottom of changes from 2019 to 2020 (suffering something like what Wisconsin and Marquette experienced this year).

It’s a seemingly small change in methodology, and it’s hard to know what a number like “0.0497” means to an overall score. But it’s worth identifying that the changes are not value-neutral and can affect similarly-situated schools quite differently.

Federal judges have already begun to drift away from hiring Yale Law clerks

On the heels of the latest controversy at Yale Law School, which David Lat ably describes over at Original Jurisdiction, a federal judge penned an email to fellow judges: “The latest events at Yale Law School, in which students attempted to shout down speakers participating in a panel discussion on free speech, prompt me to suggest that students who are identified as those willing to disrupt any such panel discussion should be noted. All federal judges—and all federal judges are presumably committed to free speech—should carefully consider whether any student so identified should be disqualified from potential clerkships.”

The truth is, Yale Law has already seen falling clerkship placement numbers in recent years. Incidents like this may harden some judges’ opposition. (There are caveats, of course, about what factors affect a judge’s hiring practices, the political salience of the issues here, and so on.)

I closely track federal judicial clerkship placement, and I have in recent years included a three-year average of clerkship placement in a report I release every two years. The latest version of that report is here. But we can look at some trends among a handful of schools. I select eight of the (historically) highest-performing: Yale, Stanford, Chicago, Harvard, Duke, Virginia, Michigan, and UC-Irvine. I’ll look at the last eight years’ placement. (Any choice of schools and window of time is a bit arbitrary, and I could go back for more data or more schools if I wanted. I didn’t look at 2012 or earlier data, so I don’t know what I’m missing with this cutoff.)

Let me start by pointing out that the total placement among recent graduates has been fairly steady (see the chart). Schools report between 1150 and 1250 placements per year.

Some declines may well be attributable to vacancies in the federal judiciary that were unfilled. It does not appear that there is a “trend” of hiring materially fewer recent law school graduates in favor of clerks with work experience.

But this means that there’s roughly a fixed set of possible clerkship positions each year. If some schools are declining in placement, we would expect to see other schools improving in placement. We can’t necessarily treat those as one-to-one tradeoffs (e.g., a judge “stops” hiring from Yale and “starts” hiring from Chicago), but we can watch some aggregate trends.

I’ll start with percentage of graduates placed into a full-time, long-term federal clerkship. Admittedly, this doesn’t capture those who work then clerk. But there is some consistency in the reporting of data over the years. It makes no distinction among competitiveness of clerkships or types of judges (e.g., appellate or district court). Percentages can also fluctuate with the class size or be deceptive based on class size; I’ll dig into the raw figures in a moment.

A few items stand out. Yale would typically place between 25% and 35% of its class into federal clerkships. Its number is low in 2020, but not the lowest in this time period. A couple of times, Stanford has placed a higher percentage of clerks than Yale.

But noteworthy is Chicago’s climb, from 10% of the class in 2013 to a whopping 27.6% in 2020, for the first time in recent memory besting Yale.

A few other trends are noteworthy. Apart from Irvine’s decline (which may coincide with the departure of founding Dean Erwin Chemerinsky), we see the University of Virginia placing fourth with 17.5% placement. It’s done well in recent years, including occasionally edging out Harvard, and (apart from a 2017 dip) shows a trendline of consistent and perhaps improving placement.

Let’s now look at the raw totals of placement. Recall that these figures are going to help assess placement into the market of roughly 1150 to 1250 total new clerks a year.

Harvard tops the list, as its 15-20% placement into clerkships still means a whopping 80 to 120 clerks a year, given its tremendous class size. But, it is notable to see it at an eight-year low in placement. Yale, which had consistently been second in raw placement for the previous seven years, has slipped to fourth in 2020, as both Chicago (56) and Virginia (55) placed more federal clerks than Yale (52).

Now, it’s perhaps no coincidence that Yale graduated just 197 students in the Class of 2020, its smallest class in this eight-year period, and a smaller class may partly explain the decline in raw placement. Still, federal judges needed clerks in 2020. They simply looked elsewhere at slightly higher rates.

But at a larger level, it’s worth noting that federal judges do change their hiring preferences, and we may be witnessing some of that right now, regardless of whether some judges are “investigating” whether some graduates of some law schools have acted in a disruptive manner at a public event. There are, of course, any number of reasons why federal judges looked elsewhere, returning to a point at the top of this post. It could be that law students at some law schools, more than others, are self-selecting out of applying to federal judges (opting for lucrative large law firm placement, competitive government positions, or the booming public interest sector).

And finally, it could also be that this blip is hardly a “trend,” and we’ll wait a month to see what the Class of 2021 figures show.

Are law schools prepared for a multi-year cycle of substantial declines in 1L enrollment?

While 1L JD enrollment hit a nine-year high this year, Dean Paul Caron has been tracking the declining figures for the upcoming year. Maybe that’s not too bad for law schools, given that they’ve seen this growth and are, perhaps, a bit more able to insulate themselves against a decline next year. Applicants might end up looking more like 2020 levels rather than 2021 levels.

But another thing to watch? Test-takers are cratering. LSAT test-takers could be applying in this cycle or for a future cycle, of course. But first-time LSAT test-takers dropped from 17,113 in August 2020 to 15,888 in August 2021. Again, many of these are likely applying for the Fall 2022 admissions class. October first-time test-takers fell from 11,868 in 2020 to 10,813 in 2021.

It’s been getting worse. November saw a drop from 12,504 in 2020 to 10,010 in 2021. And January went from 11,313 first-time test-takers in 2021 to just 7244 in 2022.

The later in the cycle it gets, the more these tests are likely to portend figures for the next cycle. And they suggest a fairly significant decline for the Fall 2023 cycle.

Which law schools have grown and shrunk the most since 2015?

Law school 1L enrollment bottomed out in 2015, and it has seen a fairly steady (if not always consistent) improvement ever since. 1L enrollment in 2015-2016 was 37,071, the lowest since 1973-1974 (when there were about 50 fewer ABA-accredited law schools). Enrollment has risen to 41,710, up 12.5% from that low.

What schools have grown the most since then? And which have shrunk? A few schools have closed, of course, or have given up ABA accreditation.

27 schools have classes at least 50% larger than in 2015, with six law schools growing more than 100%. It’s a mix of newer schools growing, reorganized schools rebounding, strategic shrinking in 2015 now rebounding, new capital for schools to grow, capitalizing on other regional law school closures, and other assorted reasons, I’d say. (There’s also year-to-year fluctuation, and there’s no guarantee these schools were at their lowest ebb in 2015.)

| School | 1Ls 2015 | 1Ls 2021 | Delta |
| --- | --- | --- | --- |
| Charleston | 85 | 270 | 217.6% |
| New Hampshire | 76 | 219 | 188.2% |
| Liberty | 51 | 133 | 160.8% |
| Lincoln Memorial | 50 | 125 | 150.0% |
| Seton Hall | 151 | 354 | 134.4% |
| Toledo | 70 | 159 | 127.1% |
| New England Law \| Boston | 198 | 392 | 98.0% |
| Widener-Commonwealth | 66 | 125 | 89.4% |
| Appalachian | 33 | 62 | 87.9% |
| Loyola-Chicago | 206 | 381 | 85.0% |
| UMass-Dartmouth | 71 | 126 | 77.5% |
| Northeastern | 140 | 239 | 70.7% |
| Albany | 126 | 215 | 70.6% |
| Widener-Delaware | 154 | 261 | 69.5% |
| George Mason | 157 | 262 | 66.9% |
| Baylor | 131 | 213 | 62.6% |
| Southern | 210 | 340 | 61.9% |
| Belmont | 89 | 144 | 61.8% |
| Tulsa | 86 | 139 | 61.6% |
| Connecticut | 96 | 153 | 59.4% |
| Nebraska | 102 | 161 | 57.8% |
| Pace | 198 | 312 | 57.6% |
| Texas | 265 | 417 | 57.4% |
| St. Thomas (Minnesota) | 103 | 162 | 57.3% |
| Gonzaga | 127 | 193 | 52.0% |
| Boston College | 234 | 353 | 50.9% |
| Penn State-Dickinson | 64 | 96 | 50.0% |

Schools that shrank were less common and did so less dramatically. 13 schools saw 1L enrollment declines of at least 15% between 2015 and 2021, and only one exceeded 50%. (Again, there’s year-to-year fluctuation, or temporary decisions to reduce class size to offset a recent larger class, among other reasons.)

| School | 1Ls 2015 | 1Ls 2021 | Delta |
| --- | --- | --- | --- |
| Western Michigan University | 448 | 188 | -58.0% |
| Atlanta's John Marshall | 239 | 130 | -45.6% |
| Puerto Rico | 198 | 124 | -37.4% |
| San Francisco | 205 | 148 | -27.8% |
| Southern Illinois | 121 | 89 | -26.4% |
| Florida | 310 | 241 | -22.3% |
| District of Columbia | 93 | 74 | -20.4% |
| Florida A&M | 159 | 130 | -18.2% |
| Richmond | 175 | 145 | -17.1% |
| Michigan State | 280 | 234 | -16.4% |
| Western State College of Law | 131 | 110 | -16.0% |
| Colorado | 204 | 172 | -15.7% |
| Northern Kentucky | 169 | 143 | -15.4% |

Non-LSAT standardized test scores in admissions remain concentrated at a handful of schools

The ABA requires law schools to disclose when they have 10 or more enrolled students in an incoming class who were admitted using a standardized test other than the LSAT; and the 75th, 50th, and 25th percentile scores of that cohort. Sixteen schools reported at least 10 students admitted under such programs: 12 using the GRE, 3 using the ACT, and 1 using the GMAT.

Northwestern had 11 students admitted under the GMAT, 4.7% of the class.

BYU (19 students, 14.3%), Northern Illinois (10 students, 8.5%), and Georgia (10 students, 5.2%) were the three schools with ACT admissions.

And the 12 schools using the GRE:

Hawaii (24, 24.5%)

Arizona (20, 16.4%)

Penn State-Dickinson (10, 10.8%)

Harvard (36, 9.7%)

New Hampshire (20, 9.1%)

Georgetown (44, 8.5%)

Cornell (10, 5.3%)

Cal Western (12, 5.0%)

Columbia (18, 4.2%)

NYU (16, 3.5%)

Boston College (12, 3.5%)

Penn (10, 3.5%)

That’s 232 students admitted at these 12 schools under the GRE. Other schools may have admitted GRE students, but in smaller numbers. UPDATE: The ABA has supplemented this data, which will be the subject of another post.

Another interesting question to consider—do admissions for these students look different than LSAT admissions?

This is a complicated question, and it actually reveals a difference between ETS and USNWR. The bottom line, before you read everything below, is that USNWR appears to treat the GRE materially worse than ETS recommends; and it appears that admissions are, on the whole, a bit easier for GRE students.

USNWR converts GRE scores to percentile equivalents and weights them alongside LSAT percentile equivalents. (It apparently does not do so for GMAT or ACT scores.) Now, it’s worth emphasizing that this does not appear to be how ETS’s own validity studies compared the GRE to the LSAT, as detailed in the ABA-commissioned study examining the ETS report. (More on that in a moment.)

Here’s USNWR’s methodology:

These are the combined median scores on the LSAT and GRE quantitative, verbal and analytical writing exams of all 2020 full- and part-time entrants to the J.D. program. Reported scores for each of the four exams, when applicable, were converted to 0-100 percentile scales. The LSAT and GRE percentile scales were weighted by the proportions of test-takers submitting each exam. For example, if 85% of exams submitted were LSATs and 15% submitted were GREs, the LSAT percentile would be multiplied by 0.85 and the average percentile of the three GRE exams by 0.15 before summing the two values. This means GRE scores were never converted to LSAT scores or vice versa. There were 60 law schools – 31% of the total ranked – that reported both the LSAT and GRE scores of their 2020 entering classes to U.S. News.

It’s not clear where percentile equivalents come from, but the latest LSAC percentile equivalent tables cover 2019-2020, whereas the latest ETS percentile equivalent tables cover 2017-2020.
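Here is a minimal sketch of the blending described in the quoted methodology, assuming the equal one-third weighting of the GRE sections used later in this post (per the update below, USNWR may in fact weight the sections differently). The percentile conversions themselves come from the LSAC and ETS tables, so percentiles are the inputs here rather than raw scores, and the example numbers are illustrative.

```python
# Sketch of the blending in the quoted methodology. Percentile conversions come from
# the LSAC/ETS tables, so percentiles are the inputs here. The equal one-third GRE
# weighting is an assumption; per the update later in this post, USNWR may weight
# the sections differently.
def blended_test_percentile(lsat_pct, gre_v_pct, gre_q_pct, gre_aw_pct, share_lsat):
    """Weight the LSAT percentile and the average GRE section percentile by the
    share of entrants submitting each test."""
    gre_pct = (gre_v_pct + gre_q_pct + gre_aw_pct) / 3
    return share_lsat * lsat_pct + (1 - share_lsat) * gre_pct

# Illustrative percentile inputs, with USNWR's own 85/15 LSAT/GRE split.
print(blended_test_percentile(97.8, 82, 64, 80, share_lsat=0.85))  # ~94.4
```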

The first thing we can do, then, is to look at the 50th percentile LSAT scores for the incoming classes at each of these law schools:

| School | Median LSAT | LSAT percentile (2019-20) |
| --- | --- | --- |
| Harvard | 174 | 99.2% |
| Columbia | 174 | 99.2% |
| NYU | 172 | 98.4% |
| Georgetown | 171 | 97.8% |
| Cornell | 171 | 97.8% |
| Penn | 171 | 97.8% |
| Boston College | 165 | 89.8% |
| Arizona | 163 | 85.0% |
| Penn State - Dickinson | 161 | 80.1% |
| New Hampshire | 158 | 70.4% |
| Hawaii | 156 | 62.9% |
| Cal Western | 153 | 51.7% |

Next, sticking with the same order, let’s look at the 50th percentile of the GRE-V, GRE-Q, and GRE-AW scores. At the end, I aggregate them, weighing each 1/3 like USNWR does, to give an overall percentile total. (There is tremendous compression in the GRE-AW scores, as you can readily see.)

| School | GRE-V | GRE-V pct (2017-20) | GRE-Q | GRE-Q pct (2017-20) | GRE-AW | GRE-AW pct (2017-20) | Combined GRE pct |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Harvard | 165 | 96% | 158 | 64% | 5 | 91% | 83.7% |
| Columbia | 167 | 98% | 163 | 79% | 5 | 91% | 89.3% |
| NYU | 155 | 67% | 153 | 46% | 4 | 54% | 55.7% |
| Georgetown | 159 | 82% | 158 | 64% | 4.5 | 80% | 75.3% |
| Cornell | 153 | 59% | 152 | 43% | 4 | 54% | 52.0% |
| Penn | 166 | 97% | 162 | 76% | 5 | 91% | 88.0% |
| Boston College | 165 | 96% | 163 | 79% | 5 | 91% | 88.7% |
| Arizona | 162 | 90% | 157 | 61% | 4.5 | 80% | 77.0% |
| Penn State - Dickinson | 152 | 53% | 149 | 32% | 4.5 | 80% | 55.0% |
| New Hampshire | 162 | 90% | 163 | 79% | 5 | 91% | 86.7% |
| Hawaii | 153 | 59% | 146 | 21% | 4.5 | 80% | 53.3% |
| Cal Western | 162 | 90% | 161 | 74% | 5 | 91% | 85.0% |

Now I’ll compare those percentile equivalents with the LSAT percentiles to see if the admitted students in the GRE cohort have a higher or lower composite percentile equivalent than the LSAT cohort. One more feature: I’ll reverse-engineer the GRE percentile equivalent to its LSAT score (rounding to the nearest LSAT score, or in one place identifying the two scores it falls between) for some idea of what it looks like. UPDATE: I’ve been told that USNWR does not weigh the GRE sections equally. But because it does not disclose its methodology, I am awaiting an answer on how it does weigh them.

| School | Median LSAT | LSAT pct (2019-20) | GRE pct | Delta (pct. points) | LSAT equivalent |
| --- | --- | --- | --- | --- | --- |
| Harvard | 174 | 99.2% | 83.7% | -15.5 | 162 |
| Columbia | 174 | 99.2% | 89.3% | -9.9 | 165 |
| NYU | 172 | 98.4% | 55.7% | -42.7 | 154 |
| Georgetown | 171 | 97.8% | 75.3% | -22.5 | 159/160 |
| Cornell | 171 | 97.8% | 52.0% | -45.8 | 153 |
| Penn | 171 | 97.8% | 88.0% | -9.8 | 164 |
| Boston College | 165 | 89.8% | 88.7% | -1.1 | 165 |
| Arizona | 163 | 85.0% | 77.0% | -8.0 | 160 |
| Penn State - Dickinson | 161 | 80.1% | 55.0% | -25.1 | 154 |
| New Hampshire | 158 | 70.4% | 86.7% | 16.3 | 164 |
| Hawaii | 156 | 62.9% | 53.3% | -9.6 | 153 |
| Cal Western | 153 | 51.7% | 85.0% | 33.3 | 163 |
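For the “LSAT equivalent” column, the reverse lookup is just a nearest-percentile match; a sketch is below, with a short stub of percentile values (a few taken from the LSAT table above, the rest interpolated for illustration only).

```python
# Reverse lookup used for the "LSAT equivalent" column: find the LSAT score whose
# percentile is closest to the GRE cohort's blended percentile. This table is a short
# stub (a few values from the LSAT table above, the rest interpolated for illustration).
lsat_percentiles = {160: 77.6, 161: 80.1, 162: 83.0, 163: 85.0, 164: 87.4, 165: 89.8}

def nearest_lsat(target_pct: float) -> int:
    return min(lsat_percentiles, key=lambda score: abs(lsat_percentiles[score] - target_pct))

print(nearest_lsat(83.7))  # Harvard's GRE cohort percentile -> 162, matching the table above
```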

One more: ETS also has a “comparison tool” for law schools. As mentioned, the ETS methodology for the relationship between the LSAT and the GRE is not the one USNWR is using. Instead, it is based on a different methodology that, as far as I know, has not been disclosed. But you can use an ETS calculator to determine a “predicted LSAT score,” within “5 points.” The calculator only uses the GRE-V and GRE-Q. What outputs do we get from this proprietary engine for each school?

| School | Median LSAT | GRE-V | GRE-Q | Predicted LSAT |
| --- | --- | --- | --- | --- |
| Harvard | 174 | 165 | 158 | 168 |
| Columbia | 174 | 167 | 163 | 172 |
| NYU | 172 | 155 | 153 | 157 |
| Georgetown | 171 | 159 | 158 | 163 |
| Cornell | 171 | 153 | 152 | 156 |
| Penn | 171 | 166 | 162 | 171 |
| Boston College | 165 | 165 | 163 | 169 |
| Arizona | 163 | 162 | 157 | 165 |
| Penn State - Dickinson | 161 | 152 | 149 | 154 |
| New Hampshire | 158 | 162 | 163 | 168 |
| Hawaii | 156 | 153 | 146 | 153 |
| Cal Western | 153 | 162 | 161 | 167 |

Note that the ETS “predicted” LSAT is higher—in some cases, much higher—than the USNWR formula.

At many schools, it’s still lower than the median LSAT score, but often not as dramatically different; at a few others, it’s higher (and at more than just the two schools in the USNWR table above).

So if schools were going by the ETS data and admitting comparable students, they would be penalized by USNWR, which has adopted a methodology that ETS does not use:

Why not compare the reported percentiles for the GRE and LSAT?

Percentiles represent how a test taker performed relative to other test takers who recently took the same test. The current test taker populations for the GRE General Test and LSAT exam are likely different in terms of background and ability, so the percentiles calculated for each test based on those different populations are not directly comparable. Additionally, reported percentiles for any test vary over time as the test taking population changes. Both GRE and LSAT scores are meant to be consistent across time and changes in test taking populations, so the most consistent and accurate comparisons are based on the statistical relationship between the reported scores, as provided by the Comparison Tool.

It appears that at many schools, the GRE cohort has lower incoming student credentials than the LSAT cohort. How much that affects USNWR rankings depends on the size, of course.

Now, at most of these schools, it’s a relatively small part of the incoming class. It might be that these are disproportionately joint degree students, for example. At institutions with n=10, it tells us very little about the overall class. We also don’t see all the other schools with n<10 and what they’re doing. But it’s worth noting that GRE admissions do not appear to be taking place in lockstep with LSAT admissions at most institutions.

This post has been updated and modified based on feedback received. Thanks for helpful suggestions.

Annual Statement, 2021

Site disclosures

Total operating cost: $192

Total content acquisition costs: $0

Total site visits: 65,305 (89,497) (-27% over 2020)

Total unique visitors: 55,492 (76,237) (-27% over 2020)

Total pageviews: 80,606 (110,074) (-27% over 2020)

Top referrers:
Twitter (3292)
Leiter’s Law School Reports (1421)
Facebook (1053)
TaxProf Blog (659)
Reddit (478)
Reason (124)

Most popular content (by pageviews):
Ranking the most liberal and conservative law firms (July 16, 2013) (7937)
California’s “baby bar” is not harder than the main bar exam (May 28, 2021) (7185)
What does it mean to “render unto Caesar”? (May 3, 2020) (6846)
Ranking the most liberal and conservative law firms among the top 140, 2021 edition (November 8, 2021) (5385)
Which law schools have the best and worst debt-to-income ratios among recent graduates? (Nov. 21, 2019) (2705)
Scrutinizing one voter fraud allegation: did 42,000 people vote more than once in Nevada in 2020? (December 23, 2020) (2361)

I have omitted "most popular search results" (99% of search results not disclosed by search engine, very few common searches in 2021).

Sponsored content: none

Revenue generated: none

Disclosure statement

Platform: Squarespace

Privacy disclosures

External trackers: one (Google Analytics)

Individuals with internal access to site at any time in 2021: one (Derek Muller)

*Over the course of a year, various spam bots may begin to visit the site at a high rate. As they did so, I added them to a referral exclusion list, but their initial visits are not disaggregated from the overall totals. These sites are also excluded from the top referrers list. Additionally, all visits from my own computers are excluded.