Overall legal employment for the Class of 2020 declines slightly, with large law firm and public interest placement growing

Pandemic, lockdowns, delayed bar exams—there were many challenges facing the Class of 2020, whose graduations were moved online and whose job opportunities became all the more precarious. The trends were only slightly negative, which is perhaps impressive given those challenges. Below are figures from the ABA-disclosed data (excluding Puerto Rico’s three law schools). These are ten-month figures, as of March 15, 2021, for the Class of 2020.

  Graduates FTLT BPR Placement FTLT JDA
Class of 2012 45,751 25,503 55.7% 4,218
Class of 2013 46,112 25,787 55.9% 4,550
Class of 2014 43,195 25,348 58.7% 4,774
Class of 2015 40,205 23,895 59.4% 4,416
Class of 2016 36,654 22,874 62.4% 3,948
Class of 2017 34,428 23,078 67.0% 3,121
Class of 2018 33,633 23,314 69.3% 3,123
Class of 2019 33,462 24,409 72.9% 2,799
Class of 2020 33,926 24,006 70.8% 2,514

The placement is still quite good. There was a decline of just about 400 bar passage-required jobs year-over-year, and the graduating class size increased for the first time in several years. Together, those yielded a drop to 70.8%—still better than the Class of 2018. But the notable decline in J.D. advantage jobs continues: they have dropped by nearly half in six years, to 2,514.
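As a quick sanity check, the 70.8% figure can be recomputed from the counts in the table; a minimal Python sketch using the figures above:

```python
# Recompute the Class of 2020 placement rate from the ABA table above.
graduates = 33_926   # Class of 2020 graduates (excluding Puerto Rico's schools)
ftlt_bpr = 24_006    # full-time, long-term bar passage-required jobs
print(f"{ftlt_bpr / graduates:.1%}")  # → 70.8%
```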

We can see some of the year-over-year categories, too.

FTLT Class of 2019 Class of 2020 Net Delta
Solo 236 260 24 10.2%
2-10 4,761 4,948 187 3.9%
11-25 1,769 1,755 -14 -0.8%
26-50 1,075 1,010 -65 -6.0%
51-100 864 856 -8 -0.9%
101-205 1,059 1,001 -58 -5.5%
251-500 1,044 1,030 -14 -1.3%
501+ 4,976 5,073 97 1.9%
Business/Industry 2,801 2,546 -255 -9.1%
Government 3,656 3,189 -467 -12.8%
Public Interest 2,146 2,284 138 6.4%
Federal Clerk 1,197 1,126 -71 -5.9%
State Clerk 2,135 1,938 -197 -9.2%
Academia/Education 296 269 -27 -9.1%

Last year’s sharp uptick in public interest placement was not an outlier: last year’s eye-popping number rose further, and public interest job placement is now up over 50% in two years. It is likely not an overstatement to say that law students are increasingly oriented toward public interest, and that there are ample funding opportunities in public interest work to sustain these graduates.

Additionally, extremely large law firm placement continues to boom. Placement is up more than 1,000 graduates in the last several years, breaking 5,000.

Despite a bevy of new federal judges confirmed to the bench, federal clerkship placement slid, a suggestion, perhaps, that federal judges continue to look toward clerks with experience. The drops in state clerkship and government placements might be related to the pandemic, but we’ll see if those rebound next year.

Diploma privilege, July bar exam administration, and law school employment outcomes

We saw a lot of variance in how the bar exam was administered in 2020. That assuredly affected employment outcomes for the Class of 2020, specifically the 10-month employment figures publicly released this week. (The underlying data is here.) Employment figures dropped. But the drop was not evenly distributed. And we can learn some things from decisions to introduce a version of “diploma privilege” and decisions to maintain a July administration of the bar exam—I think. Maybe.

Most graduates of most law schools take the bar in the state where the law school is. There are obviously huge outliers (Yale, among others). And most of the “emergency” diploma privilege jurisdictions—Louisiana, Oregon, Utah, and Washington—particularly favored in-state law schools. (Please note, I exclude the District of Columbia, because its “diploma privilege” is really a lengthy supervised practice requirement.) These are only four jurisdictions, accounting for only about 5.1% of all law school graduates. Yes, some of their graduates take other bar exams, and graduates of law schools in other states may take advantage of diploma privilege in these jurisdictions. But we can compare this cohort to graduates of law schools in the other 46 states (there are no law schools in Alaska, and I excluded the law schools in Puerto Rico). (Numbers may not evenly match due to rounding.)

  2020 BPR 2019 BPR Delta 2020 JDA 2019 JDA Delta
Emergency diploma privilege (4) 69.9% 68.2% 1.7 7.6% 9.5% -2.0
Others (46) 70.8% 73.2% -2.4 7.4% 8.3% -0.9

The raw numbers are below:

  2020 BPR 2020 JDA 2020 Grads 2019 BPR 2019 JDA 2019 Grads
Emergency diploma privilege 1,221 132 1,747 1,143 160 1,677
Others 22,785 2,382 32,179 23,244 2,631 31,745
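The summary percentages follow directly from these raw counts (BPR jobs divided by graduates); a minimal Python sketch, using the figures from the two tables:

```python
# Derive the BPR placement rates and year-over-year deltas from the raw counts.
raw = {  # cohort: ((2020 BPR, 2020 grads), (2019 BPR, 2019 grads))
    "Emergency diploma privilege": ((1_221, 1_747), (1_143, 1_677)),
    "Others": ((22_785, 32_179), (23_244, 31_745)),
}
for name, ((bpr20, g20), (bpr19, g19)) in raw.items():
    delta = 100 * (bpr20 / g20 - bpr19 / g19)
    print(f"{name}: {bpr20 / g20:.1%} vs {bpr19 / g19:.1%} ({delta:+.1f} pts)")
```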

Bar passage-required (“BPR”) job outcomes (“full-time, long-term,” for this and all other categories) rose 1.7 points between 2019 and 2020 among graduates from schools in these four states, compared to a 2.4-point decline in the rest of the country. The correlation is of limited value, of course, but I think it’s a good place to start considering the effect the bar exam might have on legal employment.

A related component is to compare J.D.-advantage (or “JDA”) positions—i.e., positions for which passing the bar is not a prerequisite. In the four “diploma privilege” jurisdictions, J.D.-advantage position placement declined 2 points, whereas in the rest of the country it declined 0.9 points. (Yes, those JDA figures are small….)

This inverse relationship between BPR and JDA suggests, I think, that diploma privilege did not inherently improve overall job placement at a school; instead, it may have shifted graduates from less desirable positions into more desirable ones—or, more importantly, shifted graduates from non-practice positions into the practice of law. To that end, diploma privilege does exactly what it’s designed to do (if this correlation is sufficient to suggest diploma privilege is doing something…).

But not all four of these states saw equal changes to job placement. Louisiana saw a 0.9-point decline in bar passage-required job placement, while Oregon saw it rise 0.5, Utah rise 7.9, and Washington rise 5.3. Is there something about the West that saw a better job market than the East? Hold that thought while I address another….

States had significantly different timing for their bar exam administrations. About a quarter of graduates came from the 22 states that offered a July administration of the bar exam. (Not all graduates, of course, took the July test—many may have deferred or taken a later offering.) Some states postponed a week to August, like Indiana; others canceled the exam entirely, like Delaware. Some held it in July as normal; others offered a July exam plus an additional fall exam. Did jurisdictions that had a July bar exam look any different in employment outcomes? I pulled out the four emergency diploma privilege jurisdictions (but I did keep Wisconsin).

  2020 BPR 2019 BPR Delta 2020 JDA 2019 JDA Delta
July 2020 bar offered (22) 72.8% 73.6% -0.8 7.3% 8.5% -1.2
No July 2020 bar offered (24) 70.1% 73.1% -3.0 7.4% 8.2% -0.8

And the raw numbers are below.

  2020 BPR 2020 JDA 2020 Grads 2019 BPR 2019 JDA 2019 Grads
July 2020 bar offered 6,015 604 8,265 6,035 698 8,198
No July 2020 bar offered 16,770 1,778 23,914 17,209 1,933 23,547

These results show that employment looked better in states with a July bar exam. Placement in bar passage-required jobs declined from 73.6% to 72.8%, a 0.8-point drop in these states. Placement in other states, however, declined from 73.1% to 70.1%, nearly a 3-point drop. And we see the opposite trends in J.D.-advantage jobs again, too—J.D. advantage jobs declined 1.2 points in July bar exam states, but only 0.8 points in other states.

There’s a geographic divide, however, between Eastern and Western states. Western states (16 in total, about 1/4 of all grads) saw a decline of just 0.9 points in bar passage-required placement, while Eastern states (34, 3/4) saw it fall 2.7 points.

Break those down further into states with a July bar exam or without, and the divide becomes starker still—exacerbated, of course, by New York, which did not have a July bar exam, has the largest potential employment market, and was hit hardest by the pandemic.

  2020 BPR 2019 BPR Delta 2020 JDA 2019 JDA Delta
West, July 2020 bar offered (12) 74.0% 73.0% 1.0 7.1% 8.4% -1.2
East, July 2020 bar offered (12) 72.3% 73.4% -1.1 7.4% 8.7% -1.3
West, no July 2020 bar offered (6) 68.2% 70.0% -1.8 6.6% 8.3% -1.7
East, no July 2020 bar offered (32) 70.6% 73.8% -3.2 7.7% 8.3% -0.5

Once more, the raw numbers:

  2020 BPR 2020 JDA 2020 Grads 2019 BPR 2019 JDA 2019 Grads
West, July 2020 bar offered 2,022 195 2,734 2,044 234 2,800
East, July 2020 bar offered 4,613 470 6,384 4,550 537 6,201
West, no July 2020 bar offered 4,144 401 6,072 4,278 508 6,110
East, no July 2020 bar offered 13,227 1,448 18,736 13,515 1,512 18,311

It’s a small cache of data reflecting only a sliver of the variables at play in the employment outcomes. But it does appear that diploma privilege, or sticking with a July bar exam administration, had a positive effect on employment. Of course, we can run back around the correlation-causation fights: were the jurisdictions with July bar exams simply the least affected by the pandemic, so that legal employers were able to hire more, rather than anything about the timing of the bar exam itself? But it’s some data worth considering when examining the costs of changes to the bar exam.

For the second year in a row, Alabama's admissions standards (partially) trump Yale's

For the second year in a row, Alabama has reported that its incoming 1L class has a 75th percentile undergraduate GPA of (exactly) 4.0. That means at least 25% of its incoming class has a 4.0 GPA or higher. That trumps all schools, including Yale, which this year sits at a 75th percentile UGPA of 3.99.

And Alabama also reports this year a 50th percentile undergraduate GPA of 3.94. That’s tops in the nation, tied with Yale. (Harvard’s is 3.88, for comparison.)

Above-4.0 GPAs are not uncommon, because LSAC calculates an A+ as a 4.33. It all depends on one’s undergraduate program and the frequency of A+s, I suppose.

Alabama enrolled 127 1Ls last year, so that’s about 32 students with a 4.0 or higher, and about 64 with a 3.94 or higher. The total, Alabama reports, includes 11 students without an LSAT score—that cohort instead relies on UGPA and ACT scores (as some schools have done in the past consistent with ABA regulations), and that cohort of 11 had a mean UGPA of 4.04. The threshold for participation in that program is a 3.90 UGPA (and it’s extending a “streamlined” program with a 3.90 UGPA requirement to a number of Alabama schools this fall).

"The Diamonds Hidden in H.R. 1's Massive Mine"

Over at RealClearPolitics, I have this piece, "The Diamonds Hidden in H.R. 1’s Massive Mine.” It begins:

At a whopping 886 pages, H.R. 1, the For the People Act of 2021, has stirred plenty of controversy. It passed the House along almost perfectly partisan lines: 220 Democrats supported it; 209 Republicans and one Democrat opposed it. The Senate is considering a similar bill.

But within those 886 pages are at least a few provisions that can generate some consensus. Most are rolled over from previous failed bills in Congress, and if they were standalone measures, perhaps they could garner supermajority bipartisan support.

New USNWR metric favors $0 loans over $1 loans for graduating law students

On the heels of the new USNWR law school rankings, with its many pre-release errors, we also have a new metric: student debt. It’s actually two components: 3% weight to average law school debt incurred upon graduation among those incurring debt; and 2% weight to the percentage of students who took out loans.

The only problem? It artificially boosts schools with relatively high debt loads and relatively high percentages of students who do not take out loans. Call it the “independently wealthy law student enrollment” bonus. (Of course, it could also be disproportionate full tuition plus living expense scholarships, but that seems less likely.)

When I started tracking law school affordability years ago, I counted students who took out no debt as $0 and weighted them into the average of law school debt. I included caveats then, as I’ve continued to do. But it doesn’t make much sense to treat these two cohorts differently. Some take out loans; others don’t. The average is the average. Include some caveats and weigh all the students together.

USNWR thinks otherwise.

It separately ranks average indebtedness and the percentage of students who take out loans. This skews results against schools with relatively low debt but a relatively high percentage of students who take out loans, and in favor of schools with relatively high debt but a relatively low percentage of students who take out loans. The latter is the independently wealthy law student enrollment bonus.

This is exacerbated by the tremendous compression among law schools around the percentage of students incurring debt. The difference between 74% and 75%—really, a rounding error—is 0.044 “scaled and weighted” points. That’s the same as the difference of an extra $2,300 in average indebtedness, which also converts to about 0.044 “scaled and weighted” points. Even though the average is weighted at 3% and the percentage incurring debt at 2%, the disparity is stark.

UPDATE: I should add, if one were to weight them at 2% and 3%, the figures would be much smaller, more like 0.003. I was doing a 60%-40% allocation using the larger figures to compare these figures to one another, not as components of the overall rankings. But the relative figures still hold, whichever “scaled and weighted” points one wants to use.

How stark? Assume a graduating class of 150 students. Let’s assume, for the moment, average debt of $108,000 (about the median) and 74% indebtedness (about the median).

To move from 75% indebted to 74% indebted is 2 students. Pick off the two students with the least debt and tell them they qualify for some student loan forgiveness program—really, the two students least in need of it. That could bring you down 0.044 points.

In contrast, the 111 indebted students bear $11,988,000 in cumulative debt—nearly $12 million. To bring the average debt load down $2,300—an equivalent amount of scaled and weighted points—you’d need to trim a whopping $255,300 from the cumulative debt load. (Admittedly, a school can only “game” this so much before the average indebtedness begins to rise substantially among the remaining students.)
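The arithmetic above can be sketched in a few lines of Python, using the hypothetical class described earlier (150 graduates, 74% indebted, $108,000 average debt among borrowers):

```python
# Tradeoff arithmetic for the hypothetical class above.
class_size = 150
avg_debt = 108_000
indebted = round(class_size * 0.74)   # 111 students with debt

cumulative = indebted * avg_debt      # total debt among the indebted students
trim = indebted * 2_300               # cut needed to drop the average by $2,300
print(f"cumulative: ${cumulative:,}")  # → cumulative: $11,988,000
print(f"trim needed: ${trim:,}")       # → trim needed: $255,300
```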

It’s a poor metric, in my judgment, when the tradeoffs are placed against each other like that. I ran some figures to examine how the methodology would play out if we had a 5% weight for the single metric of average indebtedness, factoring in students who did not incur any debt among the overall debt figures. I then compared that metric to a 3%-2% weight separating the schools. (The raw figures are here.)

Here’s my rough calculation of the ten schools that benefited most, and the ten schools that benefited least, from the decision to separate the columns rather than combine them. Remember, these aren’t sorted by lowest-to-highest debt loads. Instead, they’re sorted by how much each school benefited from the decision to separate the two metrics rather than put them together.
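The parenthetical figures in the lists below appear to be the all-student averages: the borrowers’ average debt spread over the entire class, with non-borrowers counted at $0. A minimal sketch (truncating to whole dollars, which matches the reported figures):

```python
# All-student average debt: borrowers' average times the fraction of the class
# incurring debt, counting non-borrowers as $0; truncated to whole dollars.
def all_student_avg(borrower_avg: int, pct_with_debt: float) -> int:
    return int(borrower_avg * pct_with_debt)

print(all_student_avg(190_184, 0.82))  # Southwestern → 155950
print(all_student_avg(69_727, 0.90))   # Cleveland State → 62754
```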

USNWR debt metric winners (over metric that averages all students together)

Southwestern Law School $190,184 - 82% ($155,950)

California Western School of Law $164,918 - 88% ($145,127)

St. Thomas University $161,701 - 86% ($139,062)

Nova Southeastern University (Broad) $155,193 - 87% ($135,017)

Columbia University $190,141 - 66% ($125,493)

Golden Gate University $151,854 - 83% ($126,038)

American University (Washington) $159,723 - 76% ($121,389)

Harvard University $170,866 - 70% ($119,606)

University of San Francisco $156,460 - 77% ($120,474)

Florida Coastal School of Law $145,245 - 86% ($124,911)

While some of these schools have a percentage incurring debt above the 74% median, even 85% indebtedness, when weighted, presses a school only so high because of the compression. When factored into the overall debt figures, however, their high average debt keeps these schools well above the overall median.

USNWR debt metric losers (over metric that averages all students together)

University of South Dakota $53,253 - 80% ($42,602)

Cleveland State University (Cleveland-Marshall) $69,727 - 90% ($62,754)

Florida A&M University $61,500 - 81% ($49,815)

Ohio Northern University (Pettit) $71,134 - 88% ($62,598)

University of Nebraska--Lincoln $63,027 - 78% ($49,161)

Rutgers University $62,210 - 75% ($46,658)

University of North Dakota $67,281 - 78% ($52,479)

University of Arkansas--Fayetteville $68,877 - 79% ($54,413)

Texas Tech University $56,898 - 72% ($40,967)

University of Utah (Quinney) $76,344 - 85% ($64,892)

These schools—unsurprisingly, public schools with low tuition and low indebtedness—suffered in the rankings because 90% indebtedness looks absolutely terrible compared to the 74% median; yet it would seem that these students are, on the whole, dramatically better off than students at some of the schools that benefited from the separate rankings.

One notable outlier is the University of Tulsa, with around $90,000 in indebtedness, well below the median, but reporting 100% indebtedness, putting it at the very top of that column and giving it an extremely poor score on this metric. But when measured against the all-student median of about $80,000, it performs only slightly above the median and would benefit significantly from a combined metric.

It’s worth noting that some schools that most benefited had above-median (again, about 74%) percentages, and some schools that benefited least had below-median percentages. But the size of the debt was spread far more significantly from top to bottom, which skewed the overall results.

In short, if schools want to manipulate this poor metric from USNWR, the solution is to buy off the least debt-ridden students before graduation to reduce their loans to $0—or admit far more independently wealthy students, or provide more full-tuition scholarships at the expense of partial tuition scholarships. Those, in my judgment, are the wrong incentives.

Indebtedness metrics and USNWR rankings

Years ago, as I began to look at law student indebtedness metrics, I noted a number of reasons why debt figures should be construed with caveats.

Students with low debt loads could be independently wealthy or come from a wealthy family willing to finance the education. They could have substantial scholarship assistance. They could earn income during school or during the summers. They could live in a low cost-of-living area, or live frugally. They could have lower debt loads because of some combination of these and other factors.

Scholarship awards have, in recent years, appeared to be outpacing tuition hikes—which has been a several-year trend and places schools in increasingly precarious financial positions. Students are no longer purchasing health care due to the ability to remain on their parents' health insurance under federal law, a significant cost for students a few years ago. Schools have increasingly eased, or abolished, stipulations on scholarships, which means students graduate with less debt. Some schools have slashed tuition prices. We might simply be experiencing the decline of economically poorer law students, resulting in more students who need smaller student loans—or none at all. Students may be taking advantage of accelerated programs that allow them to graduate faster with less debt (but there are few such programs). As JD class sizes shrink, it's increasingly apparent that students who would have paid the "sticker" price are increasingly pursuing options at institutions that offer them tuition discounts.

These debt figures are only an average; they do not include undergraduate debt, credit card debt, or interest accrued on law school loans while in school. The average may be artificially high if a few students took out extremely high debt loads that distorted the average, or artificially low if a few students took out nominal debt loads that distorted the average.

Some borrowers will be eligible for Public Service Loan Forgiveness programs. That might make their debt loads appear high when they will ultimately have those loans paid off. And some borrowers will take out a high amount of debt only to earn very high salaries upon graduation.

In short? There are a lot of limitations to debt metrics.

Of course, I think that, on the whole, the lower the percentage of students taking out debt, the better; and a smaller debt load is better. Those are obvious points. But as USNWR includes two new indebtedness metrics worth 5% of the overall rankings, it’s worth considering the justification.

One reason is to say that “many new lawyers are postponing major life decisions like marriage, having children and buying houses – or rejecting them outright – because they are carrying heavy student loan debts.” True enough. But it’s also a reason, I think, to look at the debt-to-income ratios of graduates. That is, not all debts look the same—they are much less onerous when income levels are higher. Even this is a limited look, as cost of living matters dramatically, too. And there are alternative concerns—students may choose jobs they don’t really want, or stick with loan forgiveness programs they otherwise wouldn’t, in order to pay down debt. All important caveats.

Another is a racial disparity concern: “J.D. graduate debt is impacting Black and Hispanic students the most since they borrow more, according to the ABA.” That’s understating it: White students had average indebtedness of about $101,000 in 2018, compared to $150,000 for Hispanic students and $199,000 for Black students.

But more to the point, why does USNWR think this metric will improve these disparities? The incentives for law schools are currently to maximize the median LSAT and UGPA scores, which focuses “bounty” scholarships on high-performing LSAT scorers, who tend to be, at a given institution, white students. One way to both reduce debt levels and improve admissions metrics—both now valued by USNWR—is to increase these disparities. Independently wealthy students, those who take out zero dollars in loans, are also more attractive to a law school—and, again, socioeconomic status interacts with racial characteristics, which is likely to increase these disparities. It’s strange, then, to cite race as a basis for including a metric, but to include the metric in such a way as to offer opportunities to exacerbate the very concerns raised.

Furthermore, USNWR already incentivizes schools for being expensive (and fails to disclose that data publicly). This means USNWR incentivizes a very expensive school with low reliance on tuition—pressing schools toward reliance on grants, endowments, or central administration support.

And Goodhart’s Law may come to debt metrics. Again, with my caveats above, I think fewer students with debt and smaller debt loads are, of course, on the whole, a good thing. It might be that schools really start to focus on financial health of students and provide greater counseling to students in law school. It might be that schools will take seriously these figures and do their best to reduce them for all students, not simply in ways that manipulate metrics at the margins. But with any newly-introduced metric, it’s not clear how it will play out.

I’ll have another post on the indebtedness metrics and how they are skewed to favor schools based on percentage of students who incurred debt instead of average debt—and why that’s the wrong approach.

The USNWR law school rankings are deeply wounded--will law schools have the coordination to finish them off?

While law schools love to hate the USNWR law school rankings, they have mostly settled for complaining loudly and publicly, while internally (and sometimes externally) promoting those same rankings or working like mad to use them as a basis for recruitment. Collective action is a real problem. Furthermore, finding an effective tool to diminish the value of the USNWR law school rankings remains elusive.

But this is, perhaps, the moment for law schools seeking to finish off the USNWR rankings. In the last month, USNWR has made four separate methodological alterations between the preliminary release of the rankings and the final release:

  • It created a new “diversity” ranking of schools that did not include Asian-American law students as a component of a “diverse” law school. After protests from law schools in the wake of recent events, USNWR agreed to include them. This decision alone moved some schools as much as 100 spots in the ranking (among nearly 200 law schools).

  • Its new “diversity” ranking also does not include multiracial students (those who consider themselves members of more than one racial group). USNWR is considering that and has decided to delay the release of these new rankings.

  • A new component of the rankings on library resources added the “number of hours” the law library was available to students, worth 0.25% of the rankings. Methodological errors forced USNWR to recalculate the figures. This component—a 1-in-400th component, mind you—altered the ranking of more than 30 schools, some by as much as six spots.

  • Another new component of the rankings on library resources, also worth 0.25%, added the “ratio of credit-bearing hours of instruction provided by law librarians to full-time equivalent law students.” Errors there resulted in USNWR pulling the metric entirely and raising the weight of bar passage from 2% to 2.25% of the rankings. This decision—again, only a 1-in-400th part of the rankings—shifted another 35 schools.

These last two new metrics strike me as strange. Is a law school better off if its librarians teach more student electives rather than providing research support and assistance to students and faculty? Is a law school better if its students can access the library (not just the law school, the law school library) between 2 and 5 am? That’s what the new metrics reward. UPDATE: For more specific critiques of the library metrics, see here.

This potpourri of new metrics is made even worse by the fact that USNWR can’t even assess its own rankings correctly. It has issued multiple retractions. So what might law schools do? A few possibilities:

  • Congressional hearing. Congress assuredly has an interest in near-monopolistic behavior from an entity that increases the price of legal education and that serves as a major indicator to students who choose to enter the legal profession. It systematically undervalues public schools that are low-cost institutions by inflating emphasis on expenditures per student; and it routinely undervalues particular institutions like Howard University, one that consistently places in the upper tier of elite law firm placement and remains deeply esteemed by hiring attorneys. These strike me as ripe matters of public concern for investigation. If Congress can call tech companies to the mat, why not the rankings entity?

  • Pay-for-access boycott. USNWR charges law schools $15,000 to see the data they already provide. It strikes me that, given the low value and quality of the data, schools should just stop paying for it. Even cutting out 10 schools deprives USNWR of $150,000 in quasi-extortion cash. Sure, some schools would lose opportunities to “game” the rankings by digging in and comparing figures. But maybe every-other-year access—halving USNWR’s revenue—would stifle it.

  • Survey participation boycott. This is two-fold. The first is a refusal to fill out the survey data requests each fall. Of course, USNWR can independently collect some things if it wants to, like LSAT scores and 10-month employment figures. But it can’t replicate it all. This is, of course, a collective action problem. The second is a refusal to fill out the peer review surveys. That’s a separate problem, but I think there’s a decent solution: spike the survey. That is, rate your own school a 5 and all other schools a 1. That maximizes the value to your own school while incentivizing others to render the survey meaningless. If USNWR wants to start discounting surveys it views as “gaming,” let it articulate that standard.

  • Alternative rankings developments. Law schools, of course, hate to be compared with one another in a single ranking. But schools and students are going to use them. Why not develop metrics that law schools deem “appropriate”—such as a principal component analysis of employment outcomes—with its own separately-administered peer review score, among other things? That strikes me as a better way forward, breaking the monopoly by developing approved alternative metrics.

Of course, I imagine these, like most such projects, would fall to infighting. It’s one thing for law schools to write a strongly-worded letter decrying what USNWR is doing. It’s another thing to, well, do something about it. I confess my solutions are half-baked and incomplete means of doing so.

But if there’s a moment to topple USNWR law school rankings, it is now. We’ll see if law schools do so.

The hollowness of law school rankings

“We’re a top-ranked law school.”

Those words, in their various forms, are found everywhere in legal education marketing materials. They are hollow words. On reflection, they grow more hollow each year.

It’s hard for me to think of where to begin a post like this one. Maybe I’ll start with what we think makes a great law school. It’s great people, in a great community, doing great things. Others might have different definitions. But let’s start here.

*

Each of these requires a preexisting definition of “great.” Great at what?

The first is great people. A great law school draws faculty who excel at writing and speaking, teaching and mentoring, reading and listening. It’s people who are passionate about these aspects of legal scholarship and legal education, who aspire to give their students meaningful guidance as they begin their careers. It draws students who are engaged and active in the classroom, inquisitive, and eager for journals and service to the community.

The second is a great community. It’s one thing to have great people working in silos, or great students studying and going their own ways. A great community builds upon those assets: people who can support one another to ensure that articles are even sharper in their clarity and argument, that classroom experiences are even more meaningful as students learn from one another, and that employment opportunities for students are supported across the faculty, staff, and student body, building a culture of commitment to student success.

The third is doing great things. This requires some look at the outputs—the quality of the articles and books from the faculty, the influence of law journals and centers at the law school, the success (more than just “elite placement”) of students in legal careers in the short term and the long term. It can take a lot of forms: traditional legal scholarship and engagement with the legislature, bar, and bench; placement in elite law firms and public interest work; advancing interests in the local community and in the nation as a whole.

The broader the pool in each category—more great people, stronger community engagement, higher output of great achievements—the better the institution.

*

From a prospective student’s view, assessing these things is difficult. It can be a challenge for a prospective law student to know exactly what “great” looks like. A student may want to do X or Y kind of law, but not really know what it means when an institution touts its programs or alumni in that field, or how to weigh that against other competing concerns—or whether it’s all just hype that doesn’t translate into the results one may want. Or a prospective student may not know exactly what she wants to do (particularly true of first-generation law students), and be at a loss as to how to compare these things.

There is a temptation, then, to seek out advice. Undoubtedly, those with attorneys in the family or those in upper-class social strata or education circles get advice of varying types. But many also look for external validation, because it can be difficult to make assessments based on the representations of schools alone.

*

External validation can be rankings. I’ve been highly critical (admittedly, an easy position for a law professor to take!) of most law school rankings—at least, those rankings that purport to be comprehensive, to distill everything about a school into a single measure. But I acknowledge there’s a reason they're out there: prospective students in particular look for help evaluating schools.

I confess, I was particularly attracted to rankings early in my blogging career, even ranking the rankings. (Links, mercifully, herein omitted.) Over time, I realized that was largely a symptom of my desire to generate traffic by ranking something, anything, for someone’s feedback. That’s not to say comparing law schools is unimportant, particularly for prospective students. But it is to turn rankings into, well, clickbait. And perhaps the most clickbait-y of all are singular rankings that aggregate a series of factors into one “true” ranking. Rankings can’t do that.

*

It’s the convergence of a few things, then, that may give rise to this hollowness of rankings. One is the tyranny of metrics, the obsession with measuring everything and evaluating everything on the basis of those measurements. I’m all for the “data-driven” or empirical evaluation of what we do. The tyranny part comes when those measures are used at the expense of all others, or used without proper acknowledgment of their limitations.

The bulk of rankings methodologies are much older than the “analytics” available to us, and that we may desire to use, today. Consider, again, USNWR, which gives significant weight to inputs in its rankings, inputs that are not, in my judgment, useful. For instance, law students should worry much less about incoming metrics (essentially, self-congratulatory admissions-oriented metrics) and instead look at student outcomes.

I’ve tried to look more at student outcomes, from institutions’ commitments to reducing debt loads, to debt-to-income ratios of graduates, to employment outcomes at graduation, to federal judicial clerkship outcomes. Others have built on employment outcomes, too, in ways that are more helpful and more lucid than the USNWR figures (the published figures, for what it’s worth, are not the figures it uses in its actual ranking).

But unquestionably, the most alluring rankings are, really, any rankings, good or bad, that put a school in a good light (and may validate a prospective law student’s desire). Free pre-law magazines make them up. Blogs make them up. Clickfarms make them up.

*

Those rankings are everywhere. And their ubiquity allows schools to cite them with ease. But the line, “We’re a top-ranked law school,” reflects two great weaknesses of so many law schools: a lack of confidence and a lack of vision.

Lack of confidence arises from an inability to articulate to others (prospective students, current students, alumni, donors, faculty, staff, and the larger university) what the school is accomplishing. It might be that too many overstatements of a school’s achievements now fall on deaf ears. Or that there’s simply distrust in self-promotional presentations of a school’s accomplishments. And it’s a recognition that these “others” won’t necessarily heed a list of accomplishments without reference to some ranking, as weak or as hollow as that ranking may be, to shore up the chronicles of success about the institution.

Lack of vision arises from an inability to articulate success. Rather than define success for a public audience, schools rely on others’ definitions of success as validated through a ranking, and they promote that ranking as the end, as the definition of success.

I admit, it might simply be that these others prefer to have some external validation of the school’s quality, rather than something internal. But schools could readily identify the things I pointed out at the beginning: what makes the school great? It can be data-driven, or it can be a qualitative narrative. Ideally, it’s a combination. Schools should have confidence in their own vision as they’ve articulated and measured it, and they should be able to persuade relevant outsiders about why the law school is succeeding on these terms, not on someone else’s terms.

Maybe that’s all too idealistic. It’s impossible to unring the bell of rankings. But I think schools should be spending much more effort thinking about how to define success and how to communicate that.

*

Here we sit on the eve of yet another USNWR ranking, one that gives weight to inputs, to dated measures like how much money a school spends on its electric bills—to an overall ranking that moderately correlates with some ways that we can think of “good” schools. But it’s time for schools to think about how hollow these rankings are, and to think about how to move beyond them in ways to persuade prospective students, the greater academic community, and the public about the institution’s value.

I’ve had a version of this post drafted in my blog queue for several years. I’ve been tweaking it now and then, and just never got around to posting it. These are my initial thoughts, that of course merit much deeper evaluation in the future!

Federal judges are announcing future vacancies at an extremely high rate

Last fall, I noted that federal judges were announcing future vacancies at a historically low rate ahead of Election Day. I posited several reasons why that might be the case, but recent events suggest it’s attributable to partisan reasons.

Here’s a comparison of November 1 future vacancies in each presidential election year with February 1 future vacancies after the election:

November 1, 2000: 11 - February 1, 2001: 9

November 1, 2004: 23 - February 1, 2005: 17

November 1, 2008: 19 - February 1, 2009: 10

November 1, 2012: 19 - February 1, 2013: 19

November 1, 2016: 17 - February 1, 2017: 13

November 1, 2020: 2 - February 1, 2021: 15

If anything, that 2021 figure is deceptively low. Another five federal judges announced their intention to go senior in the first week of February, and several others have taken senior status since January 20.

(Maybe unsurprisingly, some judges announce a year-end plan to retire (12/31 or 1/1), which falls between Election Day and a new presidential administration. I think that’s why a number of announced future vacancies converted to actual vacancies by February 1.)

I’m sure there are more precise ways of examining these figures going forward, and it’ll take some time for the full effects to shake out. But we’re witnessing an extremely high rate of announcements from federal judges, timed to a new presidential administration and razor-thin co-partisan control of the Senate.