Does a school's "ultimate bar passage" rate relate to that school's quality?

Having lost data it once used to assess the quality of law schools, USNWR had to rely on ABA data. And it was already assessing one kind of outcome: first-time bar passage rate.

It introduced “ultimate bar passage” rate as a factor in this year’s methodology, with a whopping 7% of the total score. That’s more weight than the median LSAT score now carries. It’s also much higher than the weight given to the at-graduation rate in previous methodologies (4%).

Here’s what USNWR had to say about this metric:

While passing the bar on the first try is optimal, passing eventually is critical. Underscoring this, the ABA has an accreditation standard that at least 75% of a law school’s test-taking graduates must pass a bar exam within two years of earning a diploma.

With that in mind, the ultimate bar passage ranking factor measures the percentage of each law school's 2019 graduates who sat for a bar exam and passed it within two years of graduation, including diploma privilege graduates.

Both the first-time bar passage and ultimate bar passage indicators were used to determine if a particular law school is offering a rigorous program of legal education to students. The first-time bar passage indicator was assigned greater weight because of the greater granularity of its data and its wider variance of outcomes.

There are some significant problems with this explanation.

Let’s start at the bottom. Why did first-time bar passage get greater weight? (1) “greater granularity of its data” and (2) “its wider variance of outcomes.”

Those are bizarre reasons to give first-time bar passage greater weight. One might have expected an explanation (rightly, I think) that first-time bar passage is more “critical” (more than “optimal”) for employment success, career earnings, efficiency, and a host of reasons beneficial to students.

But, it gets greater weight because there’s more information about it?

Even worse, because of wider variance in outcomes? The fact that there’s a bigger spread in the Z-score is a reason to give it more weight?

Frankly, these reasons are baffling. But maybe no more baffling than the opening justification. “Passing eventually is critical.” True. But following that, “Underscoring this, the ABA has an accreditation standard that at least 75% of a law school’s test-taking graduates must pass a bar exam within two years of earning a diploma.”

That doesn’t underscore it. If eventually passing is “critical,” then one would expect the ABA to require a 100% pass rate. Otherwise, schools seem to slide by with 25% flunking a “critical” outcome.

The ABA’s “ultimate” standard is simply a floor for accreditation purposes. Very few schools fail this standard. The statistic, and the cutoff, are designed as a minimal test of whether the law school is functioning appropriately, at a very basic level. (It’s also a bit circular, as I’ve written about—why does the ABA need to accredit schools separate and apart from the bar exam if its accreditation standards hinge on passing the bar exam?)

And why is it “critical”?

USNWR gives “full credit” to J.D.-advantage jobs, not simply bar passage-required jobs. That is, its own methodology internally contradicts this conclusion. If ultimately passing the bar is “critical,” then one would expect USNWR to diminish the value of employment outcomes that do not require passing the bar.

Let’s look at some figures, starting with an anecdotal example.

The Class of 2020 at Columbia had a 96.2% ultimate bar passage rate. Pretty good—but good for 53d nationwide. The gap between 100% and 96.2% is roughly the gap between a 172 median LSAT and a 163 median LSAT. You are reading that correctly—this 4-point gap in ultimate bar passage is the same as a 9-point gap at the upper end of the LSAT score range. Or, the 4-point gap is equivalent to the difference between a peer score of 3.3 and a peer score of 3.0. In other words, it’s a lot.
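To make that comparison concrete, here is a minimal sketch of the standardization (z-score) arithmetic that rankings formulas of this kind rely on. The means and standard deviations below are invented placeholders, not USNWR’s actual distribution parameters; they are chosen only so the mechanics are visible.

```python
# Illustrative only: the means and standard deviations below are invented
# placeholders, not USNWR's actual distribution parameters.

def z_score(value, mean, sd):
    """Standardize a raw value: how many standard deviations from the mean."""
    return (value - mean) / sd

UBP_MEAN, UBP_SD = 90.0, 4.5      # ultimate bar passage (%), assumed
LSAT_MEAN, LSAT_SD = 157.0, 10.7  # median LSAT, assumed

# Standardized distance of the ~4-point ultimate bar passage gap (100 vs. 96.2)
gap_ubp = z_score(100.0, UBP_MEAN, UBP_SD) - z_score(96.2, UBP_MEAN, UBP_SD)

# Standardized distance of the 9-point LSAT gap (172 vs. 163)
gap_lsat = z_score(172, LSAT_MEAN, LSAT_SD) - z_score(163, LSAT_MEAN, LSAT_SD)

print(round(gap_ubp, 2))   # → 0.84
print(round(gap_lsat, 2))  # → 0.84
```

With these assumed spreads, a 3.8-point bar passage gap and a 9-point LSAT gap standardize to roughly the same distance, which is how a formula can treat two very different raw gaps as equivalent.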

Now, the 16 students at Columbia (among 423!) who attempted the bar exam but never passed it may say something. It may say that they failed four times, but that seems unlikely. It may be they gave up—possible, but why give up? It could be that they found success in careers that did not require bar passage (such as business or finance) and, having failed the bar exam once, chose not to retake it.

It’s hard to say what happened, and, admittedly, we don’t have the data. If students never take the bar, they are not included in this count. And so maybe there’s some consistency in the “J.D. advantage” category (i.e., passing the bar exam is not required) as a “full credit” position. But for those who opt for such a job, half-heartedly try the bar, fail, and give up—well, they fall out of the “ultimate bar passage” category.

Another oddity is that the correlation between first-time passage rate (that is, over- and under-performance relative to the jurisdiction) and ultimate bar passage rate is good, but at 0.68 one might expect two different bar passage measures to be more closely correlated. Maybe it’s good not to have measures so closely bound to one another. But these are both, literally, bar passage categories. And they seem to be measuring quite different things.

(Note that including the three schools from Puerto Rico, which USNWR did for the first time this year, distorts this chart.)

You’ll see there’s some correlation, and it maybe tells some stories about some outliers. (There’s a caveat in comparing cohorts, of course—this is the ultimate pass rate for the Class of 2020, but the first-time rate for the Class of 2022.) Take NCCU. It is in a state with many law schools whose students have high incoming predictors and whose graduates pass the bar at high rates. NCCU appears to underperform relative to them on the first-time metric. But its graduates have a high degree of success on the ultimate pass rate.

So maybe there’s some value in offsetting some of the distortions for some schools that have good bar passage metrics but are in more competitive states. If that’s the case, however, I’d think that absolute first-time passage, rather than cumulative passage, would be the better metric.

Regardless, I think there’s another unstated reason for using this metric: it’s publicly available. Now that a number of law schools have “boycotted” the rankings, USNWR has had to rely on publicly available data. It took out some factors and devalued others. But here’s some publicly available data from the ABA. It’s an “output,” something USNWR values more now. It’s about bar passage, which it’s already looking at. It’s there. So, it’s being used. That makes more sense than the purported justifications USNWR gives.

And it’s given 7% in the new rankings. That’s a shocking amount of weight for this metric for another reason: which students actually rely on this figure?

When I speak to prospective law students (whether or not they’re planning to attend a school I’m teaching at), I have conversations about employment outcomes, yes. About prestige and reputation. About cost and about debt. About alumni networks. About geography. About faculty and class size.

In thirteen years of legal education, I’m not sure I’ve ever thought to mention to a student, “And by the way, check out their ultimate bar passage rate.” First time? Sure, it’s happened. Ultimate? Can’t say I’ve ever done it. Maybe that’s just reflecting my own bias. But I certainly don’t intend to start now. If I were making a list of factors I’d want prospective students to consider, I’m not sure “ultimate bar passage rate” would be anywhere on the list.

In any event, this is one of the more bizarre additions to the rankings, and I’m still wrapping my head around it.