Which law schools have the best and worst debt-to-income ratios among recent law school graduates? 2023 update

In late 2020, I last blogged about the “debt-to-income” ratio of recent law school graduates.

The Department of Education offers data with incredible insights into the debt and earnings of university graduates. Recent updates are available, and we can look at the data again. Here are the data fields from the Department of Education:

Institution-level data files for 1996-97 through 2020-21 containing aggregate data for each institution. Includes information on institutional characteristics, enrollment, student aid, costs, and student outcomes.

Field of study-level data files for the pooled 2014-15, 2015-16 award years through the pooled 2017-18, 2018-19 award years containing data at the credential level and 4-digit CIP code combination for each institution. Includes information on cumulative debt at graduation and earnings one year after graduation.

One intriguing figure is the “debt-to-income” ratio (some people hated this term, but I’m still using it), or how much student debt recent graduates have compared to their annual earnings. Lower is better. (A slightly better way is to calculate what percentage of your monthly paycheck is required to service your monthly debt payment, or the debt-service-to-monthly-income ratio, but this gives a good idea of the relationship between debt and income.) It’s entirely imperfect, of course—graduates have interest accrued on that debt when they graduate; they may have other debt; and so on. It’s just one way of looking at the data!
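For the arithmetically inclined, here is a minimal sketch of the two ratios in Python. The loan terms (a 10-year repayment period at a 6.5% rate) are illustrative assumptions of mine, not figures from the Department of Education data.

```python
def debt_to_income(median_debt: float, median_income: float) -> float:
    """Ratio of total debt at graduation to annual earnings; lower is better."""
    return median_debt / median_income

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortization formula for a fixed-rate loan."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

def debt_service_to_monthly_income(median_debt: float, median_income: float,
                                   annual_rate: float = 0.065, years: int = 10) -> float:
    """Share of a monthly paycheck consumed by the monthly loan payment."""
    return monthly_payment(median_debt, annual_rate, years) / (median_income / 12)

# Example with Harvard's figures from the table below:
print(debt_to_income(93_235, 172_727))                  # ~0.54
print(debt_service_to_monthly_income(93_235, 172_727))  # ~0.074, i.e., ~7% of monthly pay
```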

I took the raw data file and pulled out all domestic schools that had a concentration in “law” for a “doctoral degree” or “first professional degree.” I then compared the median debt load to the median earnings figures. (Of course, there’s no guarantee these figures are the same person, and there may be other mismatches, like high earners with low debt or low earners with high debt. Again, just one way of looking at the data!)
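For those curious, a rough sketch of that filtering step looks something like the following. The file name and the debt and earnings column names are stand-ins of my own; the actual College Scorecard field-of-study file uses its own codebook names, so treat this as a sketch rather than a recipe.

```python
# A sketch of the filtering described above. DEBT_MDN and EARN_MDN are
# placeholder column names; consult the College Scorecard codebook for
# the actual field names.
import pandas as pd

df = pd.read_csv("field_of_study.csv", low_memory=False)  # hypothetical file name

law = df[
    df["CIPDESC"].str.contains("Law", case=False, na=False)
    & df["CREDDESC"].isin(["Doctoral Degree", "First Professional Degree"])
].copy()

# The ratio compares two separately reported medians, not debt and earnings
# for the same individuals.
law["debt_to_income"] = law["DEBT_MDN"] / law["EARN_MDN"]
print(law.sort_values("debt_to_income")[["INSTNM", "debt_to_income"]].head(20))
```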

Not all schools are listed due to some data issues—sometimes the Department of Education fails to disclose certain data for some institutions.

The Department of Education site defines these figures as follows:

Field of Study Median Earnings

The median annual earnings of individuals who received federal financial aid during their studies and completed an award at the indicated field of study. To be included in the median earnings calculation, the individuals needed to be working and not be enrolled in school during the year when earnings are measured. Median earnings are measured in the fourth full year after the student completed their award.

These data are based on school-reported information about students’ program of completion. The U.S. Department of Education cannot fully confirm the completeness of these reported data for this school.

For schools with multiple locations, this information is based on all of their locations.

These calculations are based, in part, on calendar year 2020 earnings which may have been impacted by the pandemic and may not be predictive of earnings values in non-pandemic years.

Field of Study Median Total Debt for Loans Taken Out at This School

The median federal loan debt accumulated at the school by student borrowers of federal loans (William D. Ford Federal Direct Loan Program, the Federal Family Education Loan Program, and Graduate PLUS Loans) who completed an award at the indicated field of study. Non-federal loans, Perkins loans, and federal loans not made to students (e.g., parents borrowing from the federal Parent PLUS loan program) are not included in the calculation. Only loans made at the same academic level as the award conferred are included (e.g., undergraduate loans are not included in the median debt calculation for graduate credential levels). Note that this debt metric only includes loans originated at this school, so this metric should be interpreted as the typical debt level for attending this school alone, not necessarily the typical total debt to obtain a credential for students who transfer from another school. For schools with multiple locations, this information is based on all of their locations.

These data are based on school-reported information about students’ program of completion. The U.S. Department of Education cannot fully confirm the completeness of these reported data for this school.

That means debt loads can of course be higher if undergraduate loans were factored in.

A number of elite schools are near the top—despite their high debt levels, those debts translate into high median incomes among their graduates. A number of lower-cost schools also fare well near the top.

A good rule of thumb might be that “manageable” debt loads are those where debt is about equal to expected income at graduation—i.e., a ratio of 1.00 or lower. Only 20 schools meet that definition on median debt and earnings, and a few others are close. That said, law graduates tend to have higher earnings and see their salaries rise faster than a typical borrower, so maybe it’s not the best rule of thumb, either.

Many ratios, however, are significantly higher than that. Fifty-nine schools have ratios above 2.00; of those, 13 have ratios above 3.00. Only a couple of schools in the USNWR “top 50” rankings cross the 2.00 ratio.

Many borrowers will be eligible for Public Service Loan Forgiveness programs, either at the federal level or at their own law schools. If schools have disproportionately higher percentages of students entering those programs, their debt levels will appear worse than they actually are, and their salaries will appear on the lower end of the income side. It’s another limitation of a single-figure metric.

Of course, medians are likely skewed in other ways—the highest-earning graduates likely received the largest scholarships and, accordingly, graduated with the lowest debt.

But, the figures are below. I sort by the lowest (i.e., best) debt-to-income ratio. (Due to the size of the chart, results may be best viewed on a desktop or on a phone turned sideways.) I noted a few years ago that schools at the bottom of the list (i.e., with the highest ratios) appeared to be at a much higher risk of facing “adverse situations.”

School Debt-to-Income Ratio Median Debt Median Income
Harvard Univ. 0.54 $93,235 $172,727
Northwestern Univ. 0.78 $154,286 $196,640
George Mason Univ. 0.81 $65,077 $80,019
Cornell Univ. 0.83 $162,160 $195,233
Univ. of California-Berkeley 0.83 $155,891 $186,967
Univ. of Nebraska-Lincoln 0.84 $54,456 $64,977
Univ. of Pennsylvania 0.87 $171,488 $196,219
The Univ. of Alabama 0.88 $61,500 $70,082
Univ. of Michigan-Ann Arbor 0.88 $132,524 $150,448
Duke Univ. 0.91 $158,000 $173,119
Boston Univ. 0.91 $117,740 $128,883
Univ. of Wisconsin-Madison 0.92 $61,500 $67,155
Fordham Univ. 0.93 $147,561 $158,382
Wayne State Univ. 0.93 $61,466 $65,928
Univ. of Virginia 0.94 $178,812 $189,235
Villanova Univ. 0.95 $69,861 $73,474
Univ. of New Hampshire 0.95 $61,500 $64,654
Vanderbilt Univ. 0.97 $139,857 $144,075
Columbia Univ. in the City of New York 0.99 $198,924 $201,681
Georgetown Univ. 0.99 $162,286 $164,429
Stanford Univ. 1.00 $153,302 $153,149
Washington Univ. in St Louis 1.01 $92,540 $91,359
Texas A & M Univ.-College Station 1.02 $71,446 $70,263
Univ. of Southern California 1.02 $138,518 $135,745
Univ. of Illinois Urbana-Champaign 1.03 $77,159 $75,235
Univ. of Kansas 1.03 $61,500 $59,724
Georgia State Univ. 1.03 $72,563 $70,243
Univ. of Chicago 1.04 $188,691 $181,658
Univ. of North Dakota 1.04 $61,500 $58,885
Univ. of Utah 1.06 $74,012 $69,909
The Univ. of Tennessee-Knoxville 1.06 $61,500 $57,949
Univ. of Houston 1.06 $86,372 $81,124
Boston College 1.07 $123,000 $114,959
Univ. of Florida 1.08 $71,483 $66,008
Northeastern Univ. 1.10 $70,571 $63,909
Univ. of Arkansas 1.13 $65,000 $57,557
Temple Univ. 1.14 $81,733 $71,731
Univ. of Nevada-Las Vegas 1.14 $82,985 $72,511
Univ. of California-Davis 1.16 $92,689 $80,209
Univ. of Cincinnati 1.16 $66,694 $57,672
Univ. of Iowa 1.16 $80,268 $69,147
The Pennsylvania State Univ. 1.17 $65,436 $56,119
Univ. of Oklahoma-Norman 1.17 $69,800 $59,521
Florida State Univ. 1.17 $66,707 $56,790
The Univ. of Montana 1.18 $72,126 $61,101
Univ. of Georgia 1.18 $82,694 $69,896
Univ. of North Carolina at Chapel Hill 1.20 $91,570 $76,259
Rutgers Univ.-New Brunswick 1.23 $71,218 $57,880
CUNY Sch. of Law 1.23 $81,666 $66,167
Indiana Univ.-Bloomington 1.24 $92,000 $74,327
Univ. of Wyoming 1.24 $70,488 $56,842
Ohio State Univ. 1.25 $91,529 $73,515
Univ. of Arkansas at Little Rock 1.25 $61,500 $49,115
Univ. of St Thomas 1.26 $74,968 $59,454
Univ. of Mississippi 1.27 $69,701 $55,037
Drake Univ. 1.28 $83,526 $65,460
Drexel Univ. 1.28 $72,191 $56,367
Ohio Northern Univ. 1.29 $61,500 $47,520
Texas Tech Univ. 1.31 $86,163 $65,990
Cleveland State Univ. 1.31 $71,500 $54,679
Brooklyn Law Sch. 1.32 $96,951 $73,383
Duquesne Univ. 1.35 $72,500 $53,684
Univ. of Connecticut 1.36 $96,386 $70,942
Univ. of Missouri-Columbia 1.36 $73,501 $53,923
Quinnipiac Univ. 1.37 $81,000 $59,034
Yeshiva Univ. 1.38 $101,500 $73,371
Northern Illinois Univ. 1.38 $75,688 $54,663
Univ. of Minnesota-Twin Cities 1.40 $98,423 $70,206
Washington and Lee Univ. 1.41 $97,335 $69,076
Louisiana State Univ. 1.41 $88,622 $62,823
Washburn Univ. 1.41 $77,330 $54,793
Univ. of Kentucky 1.43 $75,150 $52,479
Univ. of Akron Main 1.44 $71,000 $49,471
Univ. of Hawaii at Manoa 1.44 $98,536 $68,638
Univ. of Washington-Seattle 1.44 $108,519 $75,253
Univ. of Toledo 1.44 $76,000 $52,619
Illinois Institute of Technology 1.46 $97,727 $67,070
St. John's Univ.-New York 1.47 $112,017 $76,210
Univ. of Notre Dame 1.47 $128,413 $87,091
Saint Louis Univ. 1.48 $99,458 $67,352
William & Mary 1.50 $105,023 $70,191
Univ. of Maine 1.50 $85,950 $57,401
Indiana Univ.-Purdue Univ.-Indianapolis 1.50 $97,806 $65,061
Albany Law Sch. 1.51 $93,800 $62,238
Loyola Univ. Chicago 1.52 $119,367 $78,406
Univ. of California-Irvine 1.54 $133,605 $86,874
Florida International Univ. 1.55 $90,411 $58,150
Arizona State Univ. Immersion 1.56 $100,564 $64,489
Southern Methodist Univ. 1.57 $145,569 $92,581
Univ. of Richmond 1.58 $100,229 $63,433
Case Western Reserve Univ. 1.59 $98,460 $61,746
Univ. of New Mexico 1.59 $91,267 $57,225
Univ. at Buffalo 1.61 $94,242 $58,697
Univ. of Tulsa 1.61 $90,365 $56,260
Univ. of Colorado Boulder 1.61 $105,696 $65,704
Massachusetts Sch. of Law 1.63 $80,384 $49,371
Univ. of Oregon 1.64 $98,655 $60,241
Univ. of California-Hastings College of Law 1.64 $139,352 $84,760
Seton Hall Univ. 1.65 $115,179 $69,650
Mitchell Hamline Sch. of Law 1.66 $101,761 $61,445
Wake Forest Univ. 1.66 $105,023 $63,235
Yale Univ. 1.67 $140,977 $84,669
Univ. of Memphis 1.69 $92,250 $54,715
West Virginia Univ. 1.71 $93,735 $54,919
Michigan State Univ. 1.71 $103,630 $60,480
Regent Univ. 1.72 $85,898 $49,875
Univ. of Pittsburgh-Pittsburgh 1.74 $109,178 $62,907
Suffolk Univ. 1.75 $113,386 $64,945
Univ. of Idaho 1.76 $100,091 $56,904
Univ. of Missouri-Kansas City 1.78 $97,000 $54,597
Argosy Univ. 1.78 $106,114 $59,569
Emory Univ. 1.79 $134,617 $75,208
Univ. of Louisville 1.80 $96,424 $53,541
Syracuse Univ. 1.80 $113,050 $62,765
Univ. of Maryland, Baltimore 1.84 $118,506 $64,417
Univ. of the District of Columbia 1.84 $110,258 $59,909
Univ. of Baltimore 1.85 $106,102 $57,324
Gonzaga Univ. 1.85 $110,687 $59,741
Valparaiso Univ. 1.87 $89,751 $48,100
Univ. of San Diego 1.87 $145,850 $77,990
Capital Univ. 1.87 $106,377 $56,836
Southern Illinois Univ.-Carbondale 1.88 $91,500 $48,797
Belmont Univ. 1.88 $101,152 $53,747
San Joaquin College of Law 1.94 $109,339 $56,377
Samford Univ. 1.95 $108,958 $55,927
New York Law Sch. 1.99 $142,500 $71,646
Loyola Marymount Univ. 2.04 $155,436 $76,120
DePaul Univ. 2.09 $131,463 $62,841
Pepperdine Univ. 2.10 $154,886 $73,898
Santa Clara Univ. 2.10 $180,127 $85,894
Baylor Univ. 2.11 $172,756 $81,912
Univ. of South Carolina-Columbia 2.12 $115,354 $54,513
Mercer Univ. 2.13 $124,216 $58,393
George Washington Univ. 2.14 $176,325 $82,298
The Catholic Univ. of America 2.18 $147,964 $67,970
South Texas College of Law Houston 2.18 $142,976 $65,593
Western New England Univ. 2.19 $97,835 $44,639
Seattle Univ. 2.20 $144,542 $65,675
Univ. of Miami 2.21 $148,750 $67,424
Univ. of the Pacific 2.22 $147,082 $66,300
Elon Univ. 2.24 $119,023 $53,224
Touro Univ. 2.27 $132,011 $58,113
New England Law-Boston 2.27 $126,248 $55,545
Creighton Univ. 2.28 $128,182 $56,322
Univ. of Detroit Mercy 2.30 $122,626 $53,322
Northern Kentucky Univ. 2.31 $101,097 $43,718
Marquette Univ. 2.37 $137,200 $57,795
Lincoln Memorial Univ. 2.39 $108,228 $45,341
Roger Williams Univ. Sch. of Law 2.39 $122,459 $51,200
North Carolina Central Univ. 2.40 $117,597 $49,032
Widener Univ. 2.46 $131,126 $53,209
St. Mary's Univ. 2.47 $145,002 $58,704
American Univ. 2.47 $161,696 $65,460
Lewis & Clark College 2.49 $149,506 $60,132
Hofstra Univ. 2.54 $163,347 $64,417
Univ. of Massachusetts-Dartmouth 2.55 $123,227 $48,378
Campbell Univ. 2.56 $135,880 $53,113
Univ. of Denver 2.56 $161,053 $62,896
Chapman Univ. 2.58 $170,800 $66,272
Howard Univ. 2.58 $185,348 $71,861
Southern Univ. Law Center 2.58 $118,010 $45,662
Florida Agricultural and Mechanical Univ. 2.59 $115,500 $44,537
Stetson Univ. 2.70 $142,533 $52,813
Univ. of Illinois Chicago 2.71 $153,993 $56,822
Vermont Law and Graduate Sch. 2.75 $139,540 $50,783
Mississippi College 2.83 $143,299 $50,576
Willamette Univ. 2.85 $162,945 $57,152
Oklahoma City Univ. 2.90 $145,281 $50,180
Faulkner Univ. 2.91 $137,560 $47,349
Golden Gate Univ. 2.93 $154,813 $52,909
Ave Maria Sch. of Law 2.94 $144,259 $49,074
Nova Southeastern Univ. 2.95 $162,455 $54,987
Florida Coastal Sch. of Law 3.17 $158,836 $50,102
St. Thomas Univ. 3.18 $166,022 $52,281
California Western Sch. of Law 3.19 $179,866 $56,303
Southwestern Law Sch. 3.50 $203,702 $58,279
Barry Univ. 3.54 $154,477 $43,676
Univ. of San Francisco 3.58 $182,582 $50,987
Charleston Sch. of Law 3.66 $152,981 $41,855
Appalachian Sch. of Law 3.79 $123,970 $32,667
Atlanta's John Marshall Law Sch. 3.96 $193,041 $48,790
Inter American Univ. of Puerto Rico 4.00 $110,693 $27,693
Arizona Summit Law Sch. 4.31 $227,656 $52,864
Western Michigan Univ.-Thomas M. Cooley 4.95 $202,668 $40,967
Pontifical Catholic Univ. of Puerto Rico 8.43 $122,712 $14,563

This table has been updated with information from Maine. Some schools, including BYU and Texas, do not have complete data reported in the Department of Education data set and cannot be included here.

How did Big Law survive the "Death of Big Law"?

The second in an occasional series I call “dire predictions.”

In 2010, Professor Larry Ribstein published a piece called The Death of Big Law in the Wisconsin Law Review. Here are a few of the more dire claims Professor Ribstein made:

  • “Big Law’s problems are long-term, and may have been masked until recently by a strong economy, particularly in finance and real estate. The real problem with Big Law is the non-viability of its particular model of delivering legal services.”

  • “When big firms try to expand without the support structure they are prone to failure. Big Law recently has been subject to many market pressures that have exposed its structural weakness. The result, not surprisingly, is that large law firms are shrinking or dying and smaller firms that do not attempt to mimic the form of Big Law are rising in their place.”

  • “These Big Law efforts to stay big are not, however, sustainable. Hiring more associates makes it harder for firms to provide the training and mentoring necessary to back their reputational bond.”

  • “In a nutshell, these firms need outside capital to survive, but lack a business model for the development of firm-specific property that would enable the firms to attract this capital. These basic problems have left Big Law vulnerable to client demands for cheaper and more sophisticated legal products, competition among various providers of legal services, and national and international regulatory competition. The result is likely to be the end of the major role large law firms have played in the delivery of legal services.”

  • “The death of Big Law has significant implications for legal education, the creation of law and the role of lawyers. First, a major shift in market demand for law graduates ultimately will affect the demand for and price of legal education. Big Law’s inverted pyramid, by which law firms can bill out even entry-level associate time at high hourly rates, has created a high demand and escalating pay for top law students. The pressures on Big Law discussed throughout this Article are ending this era with layoffs, deferrals, pay reductions, and merit-based pay.”

The late Professor Ribstein’s piece is only one of a wave of pieces that arose in the 2009-2010 reaction to the financial crisis. But large law firms appear to be thriving, and they continue to hire new law school graduates as associates at an ever-increasing clip. Two charts to consider.

First, the number of law firms with gross annual revenue exceeding $1 billion has climbed swiftly over the last decade or so. There were just 13 such firms in 2011, but 52 in 2021 (down to 50 in 2022). True, inflation accounts for some of the rising revenue. But the trend also reflects large law firms staying large—or becoming larger. (Figures from law.com AmLaw annual reports.)

Second, law student placement in those jobs. For the Class of 2011, nearly 4,700 graduates ended up in those positions, just over 10% of the graduating class. Since then, graduating classes have shrunk by several thousand students, which has helped the overall placement rate as a percentage of graduates. But raw placement has nearly doubled in the last decade, too, to over 8,500 for the Class of 2022, or nearly 25% of the graduating class.

Of course, one could find ways that “Big Law” is changing, whether that’s through the use of technology, the relationships it has with clients, its profits and salary structure, whatever it may be.

But “Big Law,” despite the dire predictions in the midst of the financial crisis, does not appear anywhere close to dead. To the extent there are large firms aggregating attorneys, with partners sharing significant profits among themselves and hiring a steady stream of associates for the large and sophisticated work of large corporate clients, the model appears not dead but growing. Perhaps other types of disruption will appear in the future to change this model. But the financial stability of the model appears largely intact.

Overall legal employment for the Class of 2022 improves slightly, with large law firm and public interest placement growing

Neither the aftermath of a pandemic, nor bar exam challenges, nor a softening economy dampened the employment outcomes for law school graduates in 2022. Outcomes improved a touch. Below are figures from the ABA-disclosed data (excluding Puerto Rico’s three law schools). These are ten-month figures as of March 15, 2023 for the Class of 2022. (FTLT BPR = full-time, long-term, bar passage-required jobs; FTLT JDA = full-time, long-term, J.D.-advantage jobs.)

Class Graduates FTLT BPR Placement FTLT JDA
Class of 2012 45,751 25,503 55.7% 4,218
Class of 2013 46,112 25,787 55.9% 4,550
Class of 2014 43,195 25,348 58.7% 4,774
Class of 2015 40,205 23,895 59.4% 4,416
Class of 2016 36,654 22,874 62.4% 3,948
Class of 2017 34,428 23,078 67.0% 3,121
Class of 2018 33,633 23,314 69.3% 3,123
Class of 2019 33,462 24,409 72.9% 2,799
Class of 2020 33,926 24,006 70.8% 2,514
Class of 2021 35,310 26,423 74.8% 3,056
Class of 2022 35,638 27,607 77.5% 2,734

Placement is very good. There was an increase of over 1,000 full-time, long-term bar passage-required jobs year-over-year, even as the graduating class was the largest since 2016. That yielded a placement rate of 77.5%. J.D. advantage jobs decreased somewhat, perhaps consistent with a hot law firm market last year.

It’s remarkable to compare the placement rates from the Class of 2012 to the present, from 56% to 78%. And it’s largely attributable to the decline in class size.
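To put numbers on that, using the table above: spread the Class of 2022’s 27,607 jobs over the Class of 2012’s 45,751 graduates and the placement rate would be only about 60% (27,607 / 45,751), not 77.5%. Most of the improvement comes from the shrinking denominator.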

Here’s a comparison of the year-over-year categories.

FTLT Category Class of 2021 Class of 2022 Net % Change
Solo 234 160 -74 -31.6%
2-10 5,205 5,070 -135 -2.6%
11-25 2,004 2,115 111 5.5%
26-50 1,218 1,360 142 11.7%
51-100 1,003 1,175 172 17.1%
101-250 1,143 1,246 103 9.0%
251-500 1,108 1,145 37 3.3%
501+ 5,740 6,137 397 6.9%
Business/Industry 3,070 2,797 -273 -8.9%
Government 3,492 3,591 99 2.8%
Public Interest 2,573 2,875 302 11.7%
Federal Clerk 1,189 1,130 -59 -5.0%
State Clerk 2,094 2,053 -41 -2.0%
Academia/Education 328 375 47 14.3%

The trend continues last year’s uptick in public interest placement, which is not an outlier. Public interest job placement is up over 100% since the Class of 2017. These eye-popping numbers continue to rise. It is likely not an overstatement to say that law students are increasingly oriented toward public interest work, and that there are ample funding opportunities in public interest work to sustain these graduates. (I include a visualization of the trend of raw placement into these jobs here.)

Sole practitioners continue to slide significantly (they were in the low 300s not long ago in raw placement).

Additionally, extremely large law firm placement continues to boom. Placement is up by thousands of graduates in the last several years. Placement in firms of at least 101 attorneys is around 8,500. Nearly 25% of all law school graduates landed in a “Big Law” firm, and more than 30% of those employed in a full-time, long-term, bar passage-required job landed in a “Big Law” firm.

Federal clerkship placement has dropped a bit, perhaps because more judges are hiring clerks with work experience rather than recent graduates, or perhaps because the pool of potential candidates is shrinking as judges hire the same graduates for multiple clerkships.

Some law schools fundamentally misunderstand the USNWR formula, in part because of USNWR's opaque methodology

Earlier this week, USNWR announced it was indefinitely postponing release of its law school rankings, after delaying their release one week. It isn’t the first data fiasco to hit USNWR’s law rankings. In 2021, it had four independent problems (two of disputed methodology and two of disputed data) that forced retraction and recalculation.

There are some likely problems with the data that USNWR collected. For instance, Paul Caron earlier noted the discrepancies in bar passage data as released by the ABA. I noticed similar problems back in January, but (1) I remedied some of them and (2) left the rest as is, assuming, for my purposes, close was good enough. (It was.) The ABA has a spreadsheet of data that it does not update, and individual PDFs for each law school that it does update—that means any discrepancies that are corrected must later be manually added to the spreadsheet. It is a terrible system, exacerbated by the confusing columns the ABA uses to disclose data. But it only affected a small handful of schools, and it is possible USNWR has observed this issue and is correcting it.

A greater mistake, advanced by law school deans, however, relates to employment data. Administrators and deans at Yale, Harvard, and Berkeley, at the very least, have complained very publicly to Reuters and the New York Times that their employment figures are not accurate.

They are incorrect. The complaint reflects a basic misunderstanding of the USNWR data, though it is admittedly exacerbated by how opaque USNWR is in disclosing its metrics.

In 2014, I highlighted how USNWR publicly shares certain data with prospective law students, but then conceals other data that it actually uses in reaching its overall ranking. This is a curious choice: it shares data it does not deem relevant to the rankings, while concealing other data that is relevant to the rankings.

The obvious one is the LSAT score. USNWR will display the 25th-75th percentile range of LSAT scores. But it uses the 50th percentile in its ranking. That figure can be found elsewhere in its publicly-facing data if one looks carefully. And it is certainly available in the ABA disclosures.

Another, less obvious one is bar passage data. USNWR will display the school’s first-time pass rate in its modal jurisdiction, and that jurisdiction’s overall pass rate. But it uses the ratio of the first-time rate to the overall rate, a number it does not show (though simple arithmetic recovers it). And in recent years, it uses the overall rate from all test-takers across all jurisdictions, which it also does not show. Again, this is certainly available in the ABA disclosures.
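To illustrate with hypothetical numbers: if a school’s first-time takers pass at 85% in its modal jurisdiction and that jurisdiction’s overall pass rate is 75%, the figure USNWR actually uses is 85 / 75 ≈ 1.13. Both inputs are displayed; the quotient never is.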

Now, on to employment data. As my 2014 post shows, USNWR displays an “employed” statistic, both at graduation and 9 or 10 months after graduation. But it has never used that statistic in its rankings formula (EDIT: at least in recent years; in the pre-recession days, it weighed employment outcomes differently). It has, instead, weighed various categories to create its own “employment rank.” That scaled score is used in the formula. And it has never disclosed how it weighs those categories.

Let’s go back to what USNWR publicly assured law schools earlier this month (before withdrawing this guidance):

The 2023-2024 Best Law Schools methodology includes:

. . .

Full credit for all full-time, long-term fellowships -- includes those that are school funded -- where bar passage is required or where the JD degree is an advantage

Maximum credit for those enrolled in graduate studies in the ABA employment outcomes grid

Note that the methodology will give “full credit” or “maximum credit” for these positions. That is, its rankings formula will give these positions full weight, as USNWR promised law schools in response to their complaints.

I had, and have, no expectation that this would change what it publicly shares with prospective law students about who is “employed.” Again, that’s a different category, not used in the rankings. I assume, for instance, USNWR believes its consumers do not consider enrollment in a graduate program as being “employed,” so it does not include them in this publicly-facing metric.

Now, how can law schools know that this publicly-facing metric is not the one used in the rankings methodology, despite what USNWR has said? A couple of ways.

First, as I pointed out back in January, “I assume before they made a decision to boycott, law schools modeled some potential results from the boycott to determine what effect it may have on the rankings.” So law schools can use their modeling, based on USNWR’s own public statements, to determine where they would fall. My modeling very closely matches the now-withdrawn rankings. Indeed, Yale was the single greatest beneficiary of the employment methodology change, as I pointed out back in January. It is very easy to run the modeling with school-funded and graduate positions given “full weight,” or given some discounted weight, and see the difference in results. It is impossible for Yale to be ranked #1 under the old formula—that is, in a world where its many graduates in school-funded or graduate positions did not receive “full weight” in the methodology. Again, very simple, publicly-available information (plus a little effort reverse-engineering the employment metrics from years past) demonstrates the outcomes.

Second, USNWR will privately share an “employment rank” with schools subscribing to its service. This raw “rank” figure is the output of the various weights it gives to employment metrics. It does not reveal how it gets there; but it does reveal where law schools stand.

It takes essentially no effort to see whether the relationship between the “employment” percentage and the “employment rank” diverges or looks largely the same. And that’s even accounting for the fact that the “rank” can include subtle weights for many different positions. At schools like Yale, there are very few variables. In 2021, its graduates fell into just 10 categories. And given that a whopping 30 of them were in full-time, long-term, law-school-funded, bar-passage-required positions, and another 7 were in graduate programs, either the mismatch between the “employment” percentage and the “employment rank” should be obvious, or the two figures should match pretty cleanly.

Third, one can also reverse engineer the “employment rank” to see how USNWR gives various weight to the various ABA categories. This takes some effort, but, again, it is entirely feasible to see how these jobs are given various weights to yield a “rank” that looks like what USNWR privately shares. And again, for schools that run these figures themselves, they can see if USNWR is actually giving full “weight” to certain positions or not.
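To make that reverse-engineering concrete, here is a minimal sketch on synthetic data. It assumes the “employment rank” score is roughly a linear combination of the shares of graduates in each ABA outcome category; the category names, the weights, and the linearity assumption are all mine, not USNWR’s disclosed method.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
categories = ["ftlt_bpr", "jd_advantage", "school_funded", "grad_study", "unemployed"]

# Stand-in for the ABA employment grid: each row is one school's share of
# graduates in each category (rows sum to 1).
X = rng.dirichlet(np.ones(len(categories)), size=120)

true_w = np.array([1.0, 0.7, 1.0, 1.0, 0.0])    # hypothetical "credit" per category
y = X @ true_w + rng.normal(0, 0.01, size=120)  # stand-in "employment rank" scores

est_w, _ = nnls(X, y)  # non-negative least squares recovers the weights
print(dict(zip(categories, est_w.round(2))))
```

With real inputs, X would come from the ABA grids and y from the scores USNWR privately shares; a clean fit with “full credit” weights on school-funded and graduate positions would confirm the promised methodology.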

USNWR’s opaque approach to “employment rank” certainly contributes to law schools misunderstanding the formula. But law schools—particularly elite ones who initiated the boycott and insisted they do not care about the rankings, only now to care very much about them—should spend more effort understanding the methodology before perpetuating these erroneous claims.

February 2023 MBE bar scores fall to all-time record low in test history

After all-time lows in 2020, matched in 2022, the February 2023 administration of the Multistate Bar Exam has hit a new low. The mean score was 131.1, down from 132.6 last year and 134.0 the year before, and well off the recent high of 138.6 in 2011. We would expect bar exam passing rates to drop in most jurisdictions.

Given how small the February pool is relative to the July pool, it’s always hard to draw too many conclusions from it. The February cohort is historically weaker than the July cohort, in part because it includes so many who failed in July and retook in February. The NCBE reports that 72% were repeaters, which also contributes to a weaker pool. That said, there are ominous signs for the near future, according to the NCBE report: “We saw a decrease in performance across all groups of examinees, and the decrease was the greatest (about two scaled score points) for likely first-time test takers.” The NCBE points to learning loss from the COVID pandemic, and it remains to be seen how much changes in law school education affected things. (But, frankly, I anticipated things could have been much worse last year, so there are a number of open questions, to be sure, about specific pedagogical issues, or specific issues relating to student wellness.)

As interest in law schools appears to be waning, law schools will need to ensure that class quality remains strong and that they find adequate interventions to assist at-risk student populations. Likewise, changes to the USNWR methodology may well increase the importance of the bar exam in the very near future.

Prior clerkship experience of Supreme Court clerks has changed dramatically in the last 10 and 20 years

David Lat’s tireless efforts to chronicle the hiring of Supreme Court clerks prompted me to look at a trend that’s developed in recent years. It increasingly appears that multiple clerkships are a prerequisite to securing a Supreme Court clerkship. So I looked at the data for the October Term 2023 class, along with comparisons to the credentials of the OT2013 and OT2003 classes. The results were pretty dramatic. (I looked only at the 36 clerks of the active justices, and the 35 in the term when Chief Justice Rehnquist served on the Court and hired three clerks instead of the usual four.)

For OT2003, just twenty years ago, 33 clerks came to the Court off one previous court of appeals clerkship, and just two others had multiple clerkships (one of which was not on the federal court of appeals). For OT2013, the number off a single prior court of appeals clerkship had dropped to 25. Another nine had two prior clerkships, one of which was not on the court of appeals, and two more showed the newer development of two separate court of appeals clerkships. Today, for October Term 2023, just seven of the 36 clerks came from a single prior court of appeals clerkship. Fourteen had two prior clerkships, at least one of which was not on the federal court of appeals. Eleven had two prior court of appeals clerkships. And four showed the novel development of three prior clerkships.

I’ve lamented that the hoops to jump through for a law school teaching position often involve a series of short-term stints and moves over the course of a few short years. Likewise, I’m not sure this is a particularly welcome development. Admittedly, Supreme Court clerks are a tiny fraction of career outcomes. But many more graduates, I think, are chasing similar credentials of serial clerkships even if they do not get a Supreme Court clerkship in the end. I am not sure that it redounds to the benefit of law students, who as fourth- or fifth-year associates have much higher billing rates and expectations but much less practical experience in the actual practice of law. For judges, I am sure that clerks with experience are beneficial, but in previous eras that role may have been given to a career clerk. I don’t know what the longer-term ramifications are, but it’s a trend I’m watching.

Law schools say they're "boycotting" the USNWR rankings, but their admissions practices suggest otherwise

Earlier, I pointed out that law schools “boycotting” the USNWR law school rankings really meant that they would not be completing the survey forms circulated to them. Some data, including expenditures per student or law student indebtedness, cannot readily be gathered elsewhere. USNWR responded by modifying its rankings criteria to use only publicly-available data.

I also noted that some schools appeared to still be “participating” in other elements of the rankings. Some, for instance, circulated promotional material to prospective USNWR voting faculty about the achievements of their schools and their faculty.

But I wanted to focus on another mismatch between what law schools are saying and what they are doing. And that’s in admissions.

Yale and Harvard, in their opening salvo, lamented the over-emphasis on the median LSAT and UGPA of incoming students. So, the thought might have gone, we are going to consider admissions based on our own criteria, not criteria dictated by USNWR. As I chronicled a decade ago, USNWR significantly distorts the incentives for law school admissions by driving schools to admit students with either an above-target-median LSAT or an above-target-median UGPA. Higher-caliber students (as measured by the “index score,” which is most predictive of law school performance) who fall just below the cusp on both medians are not admitted. Lower-caliber students who excel on one of these two measures are admitted. (UPDATE: I added a link and clarified the points made here.)

One might expect other boycotting schools to go their own way on admissions. From UCLA:

The rankings’ reliance on unadjusted undergraduate grade point average as a measure of student quality penalizes students who pursue programs with classes that tend to award lower grades (in STEM fields, for example), regardless of these students’ academic ability or leadership potential.

And from Northwestern:

First, by over-valuing median LSAT and UGPA, it incentivizes law schools to provide scholarships to students at their medians and above rather than to students with the greatest need.

You can find similar statements from schools like Vanderbilt, Fordham, and others. But any admissions-related concerns appear to be a non sequitur: these schools acknowledge that ABA data already provide the median statistics on incoming classes.

So would boycotting schools simply ignore the consequences of the USNWR formula and instead admit classes less focused on medians? It appears not.

With almost pinpoint precision, you can see that law schools continue to target a particular median LSAT and UGPA in their admissions statistics. LSD.law, which in various iterations has been a go-to source of self-reported law school admissions data for twenty years, reflects that these law schools, so far, continue to push medians.

Each image is a snapshot of where law school admissions stand today. Each green dot is an acceptance. Each is a school purporting to “boycott” the USNWR rankings, which means, in theory, it need not worry about its median LSAT & UGPA. If that were the case, we would expect admissions to decrease from the upper right to the bottom left with some gradations.

Instead, you can see that these schools have four quadrants, strongly disfavoring anyone in the lower left quadrant, i.e., those who are “below” targeted medians.

In other words, these law schools are still admitting students principally on the basis of how those students would affect their USNWR ranking.

Now, many caveats. Of course LSD.law (and Law School Numbers before it) is self-reported, self-selected data. It may not get the schools’ precise medians right, as I’ve outlined in red. But it certainly reflects a wide swath of prospective students, including those who were not admitted, both above and below any median. Some students are admitted below the medians, for personal-statement reasons, for socioeconomic factors, and for a variety of other reasons. But you can see the overwhelming target of admissions remains centered around targeted medians. These, of course, could change by the time the class is enrolled.

But I’ll be watching the schools purporting to “boycott” because USNWR inadequately values what they purport to value in admissions, to see if they change their approach. So far, it looks like they haven’t.

How can we measure the influence of President Biden's court of appeals judges?

Recent media reports have been discussing President Joe Biden’s influence on the federal judiciary, including the rapid pace of nominating and ensuring confirmation of federal judges. The count of confirmations has been something of a proxy for “influence” or “impact.” It’s true that more judges participating in argument and voting on panels, particularly judges on the federal courts of appeals, is one way of measuring influence.

But another way to measure influence could be to examine written appellate opinions. And it appears President Biden’s court of appeals judges are publishing opinions (at least, in their names) less frequently than other recent judges.

This is hard to measure comparatively across years, of course. For instance, a court’s workload can change (consider the decline in cases before the Federal Circuit in recent years). The number of filled seats for active judges, and the workload carried by senior judges, can change. Consider, for example, that new appointees to a shorthanded court probably have much more work than new appointees to a court with no vacancies, and a court with many active senior judges may leave less of a workload for new appointees than a court without them. The practices of each circuit vary wildly in terms of how often decisions are published per curiam or with summary orders rather than in the name of a judge. Getting up to speed after a confirmation in the middle of a pandemic (say, summer of 2021) may have looked different than in previous eras. In short, there are myriad reasons for differences.

Regardless of the reason, there may still be changes in output. I dug into the Westlaw database to try to collect some information and make some comparisons. Using the “JU( )” field (and later, the “DIS( )” and “CON( )” fields joined with the “PA( )” field), I looked at the 10 judges President Biden had confirmed in the first year of his presidency (really, calendar year 2021). (I excluded now-Justice Ketanji Brown Jackson, who was elevated to the Supreme Court in the middle of this window.) All were confirmed 14 to 20 months ago. I tried to exclude judges sitting by designation, names shared with other judges, Westlaw’s odd way of handling en banc decisions, and so on, with a quick perusal of results and adjustments to totals.

These 10 Biden-appointed court of appeals judges from 2021 have combined for around 140 majority, named-author opinions (regardless of whether these opinions were “precedential” or "non-precedential”) through mid-February 2023. That’s around 14 per judge. (These 10 judges have also combined for around 31 concurring or dissenting opinions.)

I then went to President Donald Trump’s nominees. They had some similarities: there were 12 court of appeals nominees in 2017, confirmed between 14 and 21 months before February 16, 2019. These 12 judges combined for around 415 majority, named-author opinions. That’s around 34 per judge. (These 12 judges also combined for around 60 concurring or dissenting opinions.)

President Barack Obama had only three federal appellate judges confirmed in his first year. They combined for around 80 majority opinions by mid-February 2011.

As I mentioned, these are rough figures, likely off by a few in one direction or another, as the Westlaw fields are imprecise and I had to cull some data on my own with quick checks. There are probably other ways of looking at the data, including the number of arguments held, the length of time from argument to an issued opinion on a case-by-case basis, and so on. It’s also a very short window so far, and it’s possible that once the years stretch on we’ll see some smoothing out of the trends. But so far, Biden’s court of appeals appointees have been publishing fewer majority opinions in their names. That’s not to say their influence may not be felt elsewhere, particularly in shaping opinions authored by other judges, in per curiam or unsigned opinions, and so on. Nor is it a measure of the influence of any particular opinion; not all opinions are the same, and some have more impact than others. As I mentioned, there are many complexities behind the reasons. But on this one dimension, frequency, there has so far been a different pace.

Modeling and projecting USNWR law school rankings under new methodologies

I mused earlier about the “endgame” for law schools “boycotting” the rankings. It’s apparent now that USNWR will not abandon the rankings, and it’s quite unclear (and I would guess doubtful) whether these rankings with different metrics will be less influential to prospective law students or employers than in the past. But the methodology will change. What might that mean for law schools?

I assume before they made a decision to boycott, law schools modeled some potential results from the boycott to determine what effect it may have on the rankings. We have greater clarity now than those schools did before the boycott, and we can model a little bit better some of the potential effects we’ll see in a couple of months. I developed some models to look at (and brace for) the potential upcoming landscape.

I’ve talked with plenty of people at other schools who are privately developing their own models. That’s great for them, but I wanted to give something public facing, and to plant a marker to see how right—or more likely, how wrong!—I am come spring. (And believe me, if I’m wrong, I’ll write about it!)

First, the criteria. USNWR disclosed that it would no longer use privately-collected data and instead rely exclusively on publicly-available data, with the exception of its reputational survey data. (You can see what Dean Paul Caron has aggregated on the topic for more.) It’s not clear whether USNWR will rely on public data other than the ABA data. It’s also not clear whether it will introduce new metrics. It’s given some indications that it will reduce the weight of the reputational survey data, and it will increase the weight of output metrics.

The next step is to “model” those results. (Model is really just a fancy word for educated guess.) I thought about several different ways of doing it before settling on these five (without knowing what the results of the models would entail).

  Current weight Model A Model B Model C Model D Model E
Peer score 0.25 0.225 0.225 0.225 0.2 0.15
Lawyer/judge score 0.15 0.125 0.125 0.125 0.1 0.1
UGPA 0.0875 0.09 0.09 0.09 0.1 0.1
LSAT 0.1125 0.12 0.13 0.16 0.17 0.15
Acceptance rate 0.01 0.02 0.03 0.02 0.03 0.03
First-time bar passage 0.03 0.05 0.1 0.07 0.05 0.12
10 month employment rate 0.14 0.3 0.25 0.23 0.25 0.3
Student/faculty ratio 0.02 0.04 0.04 0.03 0.05 0.05
Librarian ratio* 0.01 0.01 0.01 0 0 0
Ultimate bar passage 0 0.02 0 0.05 0.05 0
Other factors 0.19 0 0 0 0 0

Note that at least 19% of the old methodology is being cut out of the new methodology. Note, too, that there’s some diminished weight to, at least, the peer score and the lawyer/judge score. That means these categories have to be made up somewhere else. There’s only so much pie, and a lot of pieces simply have to get bigger. Despite the purported focus on outputs, I think some increased weight on inputs will be inevitable (absent significant new criteria being added).

I added a potential category of “ultimate bar passage” rate in three of the five models. It’s a possible output that USNWR may adopt, as it is based on publicly-available information and uses outputs, something USNWR has said it intends to rely upon more heavily.

I also added a “librarian ratio” in two of the five models. But it’s a different one from the existing librarian ratio. USNWR has indicated it will not use its internal library resources question (which was a kind of proprietary calculation of library resources), but it has not indicated that it would not use a student-faculty ratio equivalent for full time and part time librarians, so I created that factor in two of the five models.

If I had to guess, I would guess more minimal adjustments are most likely, highlighted more by Models A & B, but I think there is certainly the possibility for more significant changes, as I highlighted in Models C, D, & E.

Crucially, the 10-month employment metric is significantly increased in all models, an assumption that may be wrong, but one that also introduces some “responsiveness” (read: volatility) into the rankings, as highlighted below. (I also had to reverse-engineer the weights for the employment metric, which may be in error, and which could change beyond the changes USNWR has presently indicated.) This is one of the most uncertain categories (and the one where I am most likely to have erred in these predictions), particularly given how much weight it receives in almost any new model. It is also likely the most significant lever law schools have to move their rankings year to year. If you are wondering how or why a school moved significantly, it is likely attributable to this factor. Getting every graduate into some kind of employment is crucial for success.

I used last year’s peer and lawyer/judge scores, given how similar they tend to be over the years, but with one wrinkle. On the peer scores, I reduced any publicly “boycotting” schools’ peer score by 0.1. I assume that the refusal to submit peer reputational surveys from the home institution (or, perhaps, the refusal of USNWR to count those surveys) puts the school at a mild disadvantage on this metric. I do not know that it means 0.1 less for every school (and there are other variables every year, of course). I just made it an assumption for the models (which of course may well be wrong!). Last year, 69% of survey recipients responded, so among ~500 respondents, the loss of around 1% of respondents, even if quite favorable to the responding institution, would typically not alter the survey average. But as more respondents remove themselves (at least 14% have suggested publicly they will, with others perhaps privately doing so), each respondent’s importance increases. It’s not clear how USNWR will handle the reduced response rate. This adds just enough volatility, in my judgment, to justify the small downgrade.
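The survey arithmetic is easy to check with made-up numbers:

```python
# Back-of-the-envelope check of the survey math, with made-up numbers:
# ~500 respondents, a school with a current peer average of 3.0, and ~1%
# of respondents dropping out even after rating the school very favorably.
n, avg = 500, 3.0
lost, lost_score = 5, 5.0
new_avg = (n * avg - lost * lost_score) / (n - lost)
print(round(new_avg, 2))  # 2.98 -- too small a shift to move a score rounded to 0.1
```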

Next, I ran all the data from the schools, scaled them, weighed them, and ranked them. These are five different models, and they led to five different sets of rankings (unsurprisingly). I then chose the median ranking of each school among the five models. (So the median for any one school could be from any one of the models.)
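A minimal sketch of that scale-weigh-rank step, assuming z-score standardization (USNWR does not disclose its scaling; the toy figures below are mine):

```python
# Metrics where lower is better (acceptance rate, student/faculty ratio)
# should be negated before being passed in.
import pandas as pd

def rank_schools(metrics: pd.DataFrame, weights: dict) -> pd.Series:
    """metrics: one row per school, one column per metric, higher = better."""
    z = (metrics - metrics.mean()) / metrics.std()          # scale
    score = sum(w * z[col] for col, w in weights.items())   # weigh
    return score.rank(ascending=False, method="min")        # rank

# Toy example with made-up schools; real inputs would come from ABA data.
data = pd.DataFrame(
    {"peer": [4.8, 3.9, 2.5], "lsat": [173, 168, 155], "emp10": [0.98, 0.93, 0.80]},
    index=["School X", "School Y", "School Z"],
)
model_a = {"peer": 0.225, "lsat": 0.12, "emp10": 0.3}  # a subset of Model A's weights
print(rank_schools(data, model_a))
```

Running each of the five weight dictionaries from the table above and taking each school’s median rank across them yields the projections below.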

Let me add one note. Across these five models there was very little variance, despite some significant differences in the weighting. Why is that? Well, many of these items are highly correlated with one another, so adjusting the weights actually changes relatively little. The lower you go, however, the more compressed the rankings are, and the more volatile even small changes can be.

You’ll also note little change for most schools, perhaps no more than a typical year’s ups and downs. Unless the factors removed or given reduced weight worked as a group to favor or disfavor a school, we aren’t likely to see much change.

One last step was to offer a potential high-low range among the rankings. For each of the five models, I gave each school a rank one step up and one step down, to suggest some degree of uncertainty in how USNWR calculates, for instance, the 10-month employment positions, or diploma privilege admission for bar passage, among other things. That gave me 15 potential rankings—a low, projected, and high among each of the five models. I took the lowest of the low and the highest of the high for a projected range. With high degrees of uncertainty, this range is an important caveat.
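And the range step, on a tiny made-up schools-by-models frame standing in for the five real models:

```python
# Compute the projected range, assuming a +/-1 jitter on each model's rank
# to reflect input uncertainty; the frame below is illustrative only.
import pandas as pd

ranks = pd.DataFrame(
    {"A": [1, 2, 3], "B": [2, 1, 3], "C": [1, 3, 2], "D": [2, 1, 4], "E": [1, 2, 3]},
    index=["School X", "School Y", "School Z"],
)
projected = ranks.median(axis=1)
low = (ranks - 1).min(axis=1).clip(lower=1)  # lowest of the low
high = (ranks + 1).max(axis=1)               # highest of the high
print(pd.DataFrame({"projected": projected, "low": low, "high": high}))
```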

Below are my projections. (Again, it’s apparent a lot of schools are doing this privately, so I’ll just do one publicly for all to see, share, and, of course, critique. I accidentally switched a couple of schools in a recent preview of this ranking, so I’ve tried to double check as often as I can to ensure the columns are accurate!) Schools should not be overly worried or joyful at these rankings (or any rankings), but they should manage expectations about how they might handle any changes with appropriate stakeholders in the months ahead. Because these are projected “median” rankings, it will not cleanly add up in a true rank order relative to one another (e.g., two schools are projected at 10, and one school is projected at 11).


School Median projected rank Projected range Current rank
Yale 1 1 3 1
Stanford 1 1 3 2
Chicago 3 1 4 3
Harvard 4 3 6 4
Penn 5 3 8 6
Columbia 6 4 8 4
NYU 6 4 8 7
Virginia 8 5 9 8
Berkeley 9 8 12 9
Duke 10 8 13 11
Northwestern 10 8 13 13
Michigan 11 9 14 10
Cornell 13 10 14 12
UCLA 14 12 15 15
Georgetown 15 14 17 14
Vanderbilt 16 14 18 17
Washington Univ. 17 15 19 16
Texas 18 16 20 17
USC 19 15 20 20
Minnesota 20 19 23 21
Florida 21 19 23 21
Boston Univ. 22 20 25 17
Georgia 22 19 25 29
North Carolina 24 21 27 23
Notre Dame 24 21 27 25
BYU 26 23 31 23
Emory 26 22 33 30
Ohio State 27 24 30 30
George Washington 28 25 34 25
Wake Forest 29 24 33 37
Arizona State 30 26 33 30
Boston College 31 26 34 37
Fordham 33 30 37 37
Irvine 33 30 38 37
Alabama 34 30 37 25
Iowa 36 33 42 28
George Mason 36 30 42 30
Texas A&M 37 30 42 46
Illinois 38 33 42 35
Washington & Lee 39 34 43 35
Utah 39 34 44 37
Wisconsin 42 37 51 43
William & Mary 43 39 47 30
Pepperdine 43 41 49 52
Villanova 43 38 47 56
Indiana-Bloomington 46 41 51 43
Florida State 46 42 53 47
Davis 48 44 58 37
Arizona 48 46 55 45
Maryland 48 44 53 47
Washington 48 44 53 49
SMU 48 43 53 58
Baylor 53 46 56 58
Kansas 53 46 58 67
Colorado 55 47 61 49
Cardozo 55 48 61 52
Temple 56 53 58 63
UCSF (Hastings) 58 56 72 51
Richmond 58 55 63 52
Wayne State 58 51 64 58
Tulane 60 56 69 55
Tennessee 61 54 66 56
Oklahoma 61 58 67 88
Loyola-Los Angeles 62 58 69 67
Houston 64 60 69 58
Miami 66 61 69 73
South Carolina 66 61 72 84
San Diego 68 64 74 64
Northeastern 68 61 74 73
Seton Hall 68 61 77 73
Connecticut 69 66 78 64
Florida International 69 61 80 98
Missouri 73 67 78 67
Drexel 73 62 83 78
Georgia State 73 66 80 78
St. John's 73 68 83 84
Oregon 76 68 83 67
Case Western 76 68 85 78
Penn State Law 77 72 85 64
Kentucky 77 68 85 67
American 77 72 87 73
Denver 80 72 85 78
Marquette 82 72 85 105
Texas Tech 84 74 92 105
Cincinnati 85 81 92 88
Lewis & Clark 85 81 92 88
Penn State-Dickinson 86 83 92 58
UNLV 86 83 93 67
Loyola-Chicago 86 83 92 73
Pitt 86 83 92 78
Stetson 86 78 93 111
Chicago-Kent 92 85 93 94
Nebraska 93 91 99 78
Rutgers 94 91 102 86
Drake 94 91 99 111
St. Louis 95 93 99 98
St. Thomas (Minnesota) 96 92 105 127
West Virginia 97 93 108 118
Michigan State 98 94 106 91
Louisville 99 94 104 94

As I’ve gone through “winners” and “losers” in previous posts on individual metrics, not all of those shake out in the final rankings, which include far more than just the isolated categories I looked at earlier. But some obvious winners emerge: Georgia, Texas A&M, and Villanova each see significant improvement, regardless of which version of the methodology is used.

You may well ask, “Why is X over Y?” or “How can A be ranked at B?” The answer is, I gave you the model weights and my assumptions, and this is the result it puts out. It’s all publicly available data, and, again, many schools are privately doing this already and know their areas of strengths and weaknesses as to other law schools.

At the end of the day, we’ll see how wrong I am.

But I think it’s also at least some sign that the shakeup, for most schools, may not be nearly as dramatic as one may suppose.

UPDATE 1/18/2023: I had originally thought I made an error in the calculations I used for the bar passage rates, but some schools’ data created inconsistencies that I had to go back and check. I was right the first time!