A few longer thoughts on the four debates about the bar exam

As the debate rages in California and elsewhere about the utility of the bar exam, it's become fairly clear that a number of separate but interrelated debates have been conflated. There are at least four, and each requires a different line of thought--even if they are all ostensibly about the bar exam.

First, there is the problem of the lack of access to affordable legal representation. Such a lack of access, some argue, should weigh in favor of changing standards for admission to the bar, specifically regarding the bar exam. But I think the bar exam is only a piece of this debate, and perhaps, in light of the problem, a relatively small piece. Solutions such as implementing the Uniform Bar Exam; offering reciprocity for attorneys admitted in other states; reducing the costs to practice law, such as lowering bar exam or annual licensing fees; finding ways to make law school more affordable; or opening up opportunities for non-attorneys to practice limited law, as states like Washington have done, should all be matters of consideration in a deeper and wider inquiry. (Indeed, California's simple decision to reduce the length of the bar exam from three days to two appears to have incentivized prospective attorneys to take the bar exam: July 2017 test-takers are up significantly year-over-year, and most of that increase is not attributable to repeat test-takers.)

Second, there is the problem of whether the bar exam adequately evaluates the traits necessary to determine whether prospective attorneys are minimally competent to practice. It might be the case that requiring students to memorize large areas of substantive law and evaluating their performance on a multiple-choice and essay test is not an ideal way for the State Bar to operate. Some have pointed to a recent program in New Hampshire to evaluate prospective attorneys on a portfolio of work developed in law school rather than the bar exam standing alone. Others point to Wisconsin's "diploma privilege," where graduates of the University of Wisconsin and Marquette University are automatically admitted to the bar. An overhaul of admissions to the bar generally, however, is a project that requires a much larger set of considerations. Indeed, it is not clear to me that debates over things like changing the cut score, implementing the UBE, and the like are even really related to this issue. (That said, I do understand those who question the validity of the bar exam to suggest that if it's doing a poor job of separating competent from incompetent attorneys, then it ought to have little weight and, therefore, the cut score should be relatively low to minimize its impact among likely-competent attorneys who may fail.)

Third, there is the problem of why bar exam passing rates have dropped dramatically. This is an issue of causation, one that has not yet been entirely answered. It is not because the test has become harder, but some have pointed to incidents like ExamSoft or the addition of Civil Procedure as factors that may have contributed to the decline. A preliminary inquiry from the California State Bar, examining the decline just in California, identified the declining quality of the test-taking pool as part of the reason for the decline in bar passage scores. I have become convinced that is the bulk of the explanation nationally, too. An additional study in California is underway to examine this effect with more granular school-specific data. If the cause is primarily a decline in test-taker quality and ability, then lowering the cut score would likely change the quality and ability of the pool of available attorneys. But if the decline is attributable to other factors, such as changes in study habits or test-taker expectations, then lowering the cut score may have less of an impact on that pool. (Indeed, it appears that higher-quality students are making their way through law schools now.) Without a thorough attribution of cause, it is difficult to identify what the solution to this problem ought to be.

Fourth, there is the debate over what the cut score ought to be for the bar exam. I confess that I don't know what the "right" cut score is--Wisconsin's 129, Delaware's 145, something in between, or something different altogether. I'm persuaded that pieces of evidence, like California's standard-setting study, may support keeping California's score roughly in place. But that is just one component of many. And, of course, California's high cut score means that test-takers fail at higher rates despite being more capable than most test-takers nationally. Part of my uncertainty is that I'm not sure I fully appreciate all the competing costs and benefits that come with changes in the cut score. While my colleague Rob Anderson and I find that lower bar scores are correlated with higher career discipline rates, facts like these can only take one so far in evaluating the "right" cut score. Risk tolerance and cost-benefit analysis have to do the real work.

(I'll pause here to note the most amusing part of the critiques of Rob's and my paper. We make a few claims: that lower bar scores are correlated with higher career discipline rates; that lowering the cut score will increase the number of attorneys subject to higher discipline rates; and that the state bar has the data to evaluate the magnitude of the effect with greater precision. No one has yet disputed any of these claims. We don't purport to defend California's cut score, or to defend a higher or lower score. Indeed, our paper expressly disclaims such claims! Nevertheless, we've faced sustained criticism for a lot of things our paper doesn't do--which I suppose shouldn't be surprising given the sensitivity of the topic for so many.)

There are some productive discussions on this front. Professor Joan Howarth, for example, has suggested that states consider a uniform cut score. Jurisdictions could aggregate data and resources to develop a standard that most accurately reflects minimum competence--without the idiosyncratic preferences of the current state-by-state process. Such a proposal is worth serious consideration.

It's worth noting that state bars have done a relatively poor job of evaluating their cut scores. Few evaluate them much at all, as California's lack of scrutiny for decades demonstrates. (That said, the State Bar is now required to undertake an examination of the bar exam's validity at least once every seven years.) States have been adjusting, and sometimes readjusting, their scores with little explanation.

Consider that in 2017 alone, Connecticut is raising its cut score from 132 to 133, Oregon is lowering it from 142 to 137, Idaho from 140 to 136, and Nevada from 140 to 138. Some states have undergone multiple revisions in a few years. Montana, for instance, raised its cut score from 130 to 135 for fear it was too low, then lowered it to 133 for fear it was too high. Illinois planned on raising its cut score from 132 in 2014 to 136 in 2016, then, after raising the score to 133, delayed implementing the 136 cut score until 2017, and delayed again in 2017 "until further order." Certainly, state bars could benefit from more, and better, research.

Complicating these inquiries are the mixed motives of many parties. My own biases and priors are deeply conflicted. At times, I find myself distrustful of any state licensing system that restricts competition, and I wonder whether the bar exam is very effective at all, given the closed-book, memory-focused nature of the test. I worry when many of my students who'd make excellent attorneys fail the bar, in California and elsewhere. At other times, I find myself persuaded by studies concerning the validity of the test (given its high correlation to law school grades, which, as a law professor, I think are often, but not always, good indicators of future success), and by the sense that, if there's going to be a licensing system in place, then it ought to be as good as it can be, flaws and all.

At times, though, I realize these thoughts are often in tension because they are addressing different debates about the bar exam--maybe I'd want a different bar exam, but if we're going to have one, the current exam isn't doing such a bad job; maybe we want to expand access to attorneys, but the bar exam is hardly the most significant barrier to access; and so on. And maybe even here, my biases and priors color my judgment, and with more information I'd reach a different conclusion.

In all, I don't envy the task of the California Supreme Court, or of other state bar exam authorities, in addressing the "right" cut score for the bar exam during a turbulent time for legal education. The aggressive, often vitriolic, rhetoric from a number of individuals concerning our study of discipline rates is, I'm sure, just a taste of what state bars are experiencing. But I do hope they are able to set aside the lobbying of law deans and the protectionist demands of state bar members to think carefully and critically, with precision, about the issues as they come.

A poor attorney survey from the California State Bar on proposals to change the bar exam cut score

I'm not a member of the California State Bar (although I've been an active member of the Illinois State Bar for nearly 10 years), so I did not receive the survey that the state bar circulated late last week. Northwestern Dean Dan Rodriguez tweeted about it, and after we had an exchange he kindly shared the survey with me.

I've defended some of the work the Bar has done, such as its recent standard-setting study, which examined bar test-taker essays to determine "minimum competence." (I mentioned that the study is understandably limited in scope, particularly given the time constraints. The Bar has shared a couple of critiques of the study here, which are generally favorable but identify some of its weaknesses.) And, of course, one study alone should not determine what the cut score ought to be, but it's one data point among many studies coming along.

Indeed, the studies so far have been done with some care and thoughtfulness despite the compressed time frame. Ron Pi, Chad Buckendahl, and Roger Bolus have long been involved in such projects, and their participation here has been welcome.

Unfortunately, despite my qualified praise and acknowledgment of understandable limitations, the State Bar has circulated a poor survey to its members about the potential changes to the cut score. Below are screenshots of the email circulated and most of the salient portions of the survey.

It is very hard to understand what this survey can accomplish except to get a general sense of how members of the bar feel about what the cut score ought to be. And that's not terribly helpful in answering the question of what the cut score ought to be.

For instance, there's little likelihood that attorneys understand what a score of 1440, 1414, or "lower" means. There's also a primed negativity in the question "Lower the cut score further below the recommended option of 1414"--of course, there were two recommended options (hold in place, or lower to 1414), and the phrasing is not just "below" but "further below." Additionally, what do these scores mean to attorneys? The Standard-Setting Study was designed to determine which essays met the reviewing panel's definition of "minimum competence"; how would most lawyers know what these numbers mean in terms of defining minimum competence?

The survey, instead, is more likely a barometer of how protectionist members of the State Bar currently are. If lawyers don't want more lawyers competing with them, they'll likely prefer the cut score to remain in place. (A more innocent reason is possible, too, a kind of hazing: "kids these days" need to meet the same standards current members needed to meet when they were admitted to the bar.) To the extent the survey is about whether to turn the spigot controlling the flow of new lawyers--adding more or holding it in place--it represents the worst that a state bar has to offer.

The survey also asks, on a scale of 1 to 10, the "importance" attorneys assign to "statements often considered relevant factors in determining an appropriate bar exam cut score." These statements range from generic ones that most lawyers would find very important, like "maintaining the integrity of the profession," to ones that weigh almost exclusively in favor of lowering the cut score, like "declining bar exam pass rates in California."

One problem, of course, is that these rather generic statements have been tossed about in debates, but how is one supposed to decide which of them are appropriate costs and benefits? Perhaps this survey is one way of testing the profession's interests, but it's not entirely clear why the survey conflates two issues: what the cut score ought to be to establish "minimum competence," and the potential tradeoffs at stake in decisions to raise or lower the cut score.

In a draft study with Rob Anderson, we identified that lower bar scores are correlated with higher discipline rates and that lowering the cut score would likely result in more attorney discipline. But we also identified a lot of potential benefits from lowering the score, which many have raised--greater access to attorneys, lower costs for legal services for the public, and so on. How should one weigh those costs and benefits? That's the sticky question.

I'm still not sure what the "right" cut score is. But I do feel fairly certain that this survey to California attorneys is not terribly helpful in moving us toward answering that question.

More evidence suggests California's passing bar score should roughly stay in place

Plans to lower California's bar exam score may run up against an impediment: more evidence suggesting that the bar exam score is about where it ought to be.

My colleague Rob Anderson and I recently released a draft study noting that lower bar passage scores are correlated with higher discipline rates, and urging more collection of data before bar scores are lowered.

There are many data points that could, and should, be considered in this effort. The California state bar has been working on some such studies for months. California test-takers are more able than those in other states but fail the bar at higher rates, because California's cut score (144, or 1440 on California's own scale, which is simply 10x) is higher than that of most other jurisdictions.

Driven by concerns expressed by the deans of California law schools, and at the direction of the California Supreme Court, the State Bar began to investigate whether the cut score was appropriate. One such study was a "Standard Setting Study," and its results were published last week. It is just one data point, with obvious limitations, but it almost perfectly matches the current cut score in California.

A group of practitioners looked at a batch of bar exam essays. They graded them, assessing each as "not competent," "competent," or "highly competent." Those judgments were then refined to find the "tipping point" from "not competent" to "competent." (An evaluation of the study notes that this is similar to what other states have done in their own standard-setting studies, which have resulted in a variety of changes to bar pass cut scores; that independent evaluation identified critiques of the study but concluded that the methodology was sound and the results valid.)

The mean recommended passing score from the group was 145.1--more than a full point higher than the actual passing score! The median passing score was 143.9, almost identical to the 144.0 presently used. (The study explains why it believes the median is the better score.)

Using a band of plus or minus one standard error, the mean score may range from 143.6 to 148.0, and the median score from 141.4 to 147.7. All are well above the 133-136 scores common in many other jurisdictions, including New York's 133. And this study is largely consistent with a study in California 30 years ago, when a similar crisis arose over low passing rates, a study I identified in a recent blog post.

So, what to do with this piece of evidence? The researchers offered two recommendations for public comment and consideration: keep the score where it is, or reduce the passing score to 141.4 for the July 2017 exam alone. (Note: what a jackpot it would be for the bar test-takers this July if they received a one-time reprieve!) The recommendations nicely note many of the cost-benefit issues policymakers ought to consider--and include some reasons why California has policy preferences that may weigh in favor of a lower score (at least temporarily). The interim proposal to reduce the score to 141.4, one standard error below the recommended median value of 143.9, takes those policy considerations into account. Such a change may be modest, but it could result in a few hundred more test-takers passing the bar on the first attempt in California.

Alas, the reaction to a study like this has been predictable. Hastings Dean David Faigman--who called our study "irresponsible" and accused the State Bar of "unconscionable conduct" over the July 2016 bar exam--waited until the results dropped to critique the study with another quotable adjective soundbite, labeling it "basically useless." (A couple of other critiques are non-responsive to the study itself.)

Of course, one piece of data--like the Standard Setting Study--should not dictate the future of the bar exam. Nor should the study from my colleague Rob Anderson and me. Nor should the standard-setting study from 30 years ago. But they all point toward some skepticism that the bar exam cut score is dramatically or outlandishly too high. It might be, as the cost-benefit and policy analysis in the recommendations to the state bar suggests, that some errors ought to be tolerated with a slight lowering of the score. Or it might be that the score should remain in place.

Whatever it is, more evidence continues to point toward keeping it roughly in the current place, and more studies in the future may offer new perspectives on the proper cut score.

Does the bar exam adequately test prospective lawyers' minimum competence?

The critiques of the bar exam have grown louder over the last few years on the heels of declining bar pass rates. But the most popular critiques have changed somewhat. It used to be that external factors--such as the ExamSoft debacle--were a target. Then came charges that the bar exam was harder than usual. But the most recent charge is actually quite a longstanding critique of the bar exam--it simply isn't a good measure of prospective lawyers' "minimum competence."

The bar has attempted to adjust in the last fifty years. Many states now have a "performance test," a component designed to simulate what lawyers do--test-takers are given some law and some facts and asked to address the problem with a legal task. That said, performance tests moderately correlate with other elements of the bar exam and perhaps are not performing the function some hoped they would serve.

Regardless, critiques of the bar exam are longstanding, and some of the most popular critiques look something like this: why did a state, like California, pick this particular score as the passing score for "minimum competence"? And is the bar exam any good at testing the kinds of things that lawyers actually do? The bar exam is a three-day (in California, beginning this July, two-day), closed-book test with multiple-choice and timed essay questions that in no way resembles the real world of law practice. Why should we trust this test?

It's a fair point, and it's one best met with a question: what ought the bar test? And, perhaps a more subtle question: what if it turns out that the answer to what the bar ought to test actually aligns quite closely with the results from the existing bar exam?

A study in 1980 in California is one of the most impressive I've seen on this subject. And while it's a little old, it's the kind of thing that ought to be replicated before state bars go about making dramatic changes to their exams or scoring methods. I'll narrate what happened there. (For details, consider two reports on the study and the testimony presented to California lawmakers asking the exact same questions in 1984, after the particularly poor performance of applicants to the state bar on the July 1983 bar exam--a historically low score essentially matched in the July 2016 administration.)

After the July 1980 bar exam in California, the National Conference of Bar Examiners teamed up with the California Committee of Bar Examiners to run a study. They selected 485 applicants to the bar who had taken the July 1980 exam. Each of these applicants took an additional two-day test in August 1980.

The two-day test required participants to "function as counsel for the plaintiff in a simulated case" on one day, and as "counsel for the defendant in a different simulated case" on the other day. Actors played clients and witnesses. The participants were given oral and written tasks--client interviews, discovery plans, briefs, memoranda, opening statements, cross-examination, and the like. They were then evaluated along a number of dimensions and scored.

In the end, the scores were correlated with the applicants' bar exam scores. The relationship between the study scores and the general bar exam scores was fairly strong--"about as strong as the underlying relationship between the Essay and MBE section of the [General Bar Exam]." "In short," the study concluded, the study and the bar exam "appear to be measuring similar but not identical abilities."

Additionally, a panel of 25 lawyers spent more than two days in extended, in-depth evaluation of 18 of these participants. The panelists were clinical professors, law professors, attorneys, judges, and others with a variety of experience. They were asked to evaluate these 18 participants' performance along the various dimensions on a scale from "very unsatisfactory" (i.e., fail) to "borderline" to "very satisfactory" (i.e., pass). The panel's judgments about the pass/fail line were consistent with where the line was drawn on the California bar exam (with the caveat that this was a sample of just 18 applicants).

It might be that there are different things we ought to be testing, or that this experiment has its own limitations (again, I encourage you to read it if you're interested in the details). But before anything is done about the bar exam, it might be worth spending some time thinking about how we can evaluate what we think ought to be evaluated--and recognize that there are decades of studies addressing very similar things that we may ignore to our peril.

The best ways to visualize the impact of the decline in bar passage scores

I've visualized a lot about the decline in bar pass scores and bar passage rates in the last few years, including a post on the February 2017 decline here. For some reason, that post drew criticism as being particularly deceptive. It caused me to think a little more about how best to visualize--and explain--what the decline in multistate bar exam ("MBE") scores might mean. (I'll channel my inner Tufte and see what I can do....)

In the February 2017 chart, I didn't start the Y-axis at zero. And why should I? No one scores a zero. The very lowest scores are something in the 50s to 90s. And the score is on a 200-point scale, but no one gets a 200. So I suppose I could visualize it on the low to high ends--say, 90 to 190.

When you put it that way, it looks completely unremarkable. MBE scores have dipped a bit, but they've hardly moved at all. And it looks like my last post was simply clickbait. (It's worth noting I generate no revenue from this site!)
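For anyone who wants to recreate the comparison, here is a minimal matplotlib sketch. It uses only the three February means cited in the post below (136.2, 135.0, 134.0) as a stand-in for the full historical series, and the zoomed axis range is my own choice for illustration.

```python
import matplotlib.pyplot as plt

# February national MBE means cited in the February 2017 post below;
# a fuller historical series would make the same point.
years = [2015, 2016, 2017]
means = [136.2, 135.0, 134.0]

fig, axes = plt.subplots(1, 2, figsize=(9, 4), sharex=True)
panels = [(axes[0], (90, 190), "Full plausible range (90-190)"),
          (axes[1], (133, 138), "Zoomed to recent means")]
for ax, ylim, title in panels:
    ax.plot(years, means, marker="o")
    ax.set_ylim(*ylim)
    ax.set_xticks(years)
    ax.set_title(title)
    ax.set_ylabel("Mean scaled MBE score (February)")
plt.tight_layout()
plt.show()
```

The left panel is the "90 to 190" framing described above, where the decline all but disappears; the right panel is the kind of zoomed view that drew the criticism.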

But that surely can't be right, either. After all, bar passage rates have been declining fairly sharply in the last few years even if this mean score has only moved relatively nominally. (For extensive discussion, see the "Bar exam" category on this blog.)

That's because what really matters is the passing score or the "cut score" in each jurisdiction.

Suppose the cut score in a jurisdiction is 100. A decline from a mean score of 135 to 134 should have essentially no effect if the results are distributed along a typical bell curve (and they usually are). That's because virtually everyone would still pass even if scores dropped a bit. In contrast, if the cut score were 180, a decline from a mean score of 135 to 134 should also have essentially no effect--virtually everyone would still fail.
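To make that intuition concrete, here is a small sketch that assumes, purely for illustration, that scaled scores follow a normal curve with a standard deviation of about 15 points (a hypothetical figure; the actual spread varies by administration):

```python
from scipy.stats import norm

def pass_rate(mean, cut, sd=15.0):
    """Share of test-takers scoring at or above the cut score,
    assuming scores follow a normal curve with the given spread."""
    return 1.0 - norm.cdf(cut, loc=mean, scale=sd)

# Compare a one-point drop in the mean (135 -> 134) at three cut scores
for cut in (100, 135, 180):
    before, after = pass_rate(135, cut), pass_rate(134, cut)
    print(f"cut {cut}: {before:.1%} pass before the drop, {after:.1%} after")
```

Under these assumptions, a one-point drop barely matters when the cut score sits far from the mean, but it moves a meaningful share of test-takers from passing to failing when the cut score sits near the mean.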

But the reason for the perilous drop in bar pass rates is that this is exactly the spot where the mean scores have begun to hit the cut scores in many jurisdictions. Here's a visualization of what that looks like, with a couple of changes--a larger y-axis, historical data for the February bar back to 1976, and gridlines identifying the cut scores in several jurisdictions. (It's worth noting that this is the national MBE mean, not individualized state means; July scores are somewhat higher; and it is a mean, not a median.)

You can see that the drop in the means plunges scores past what have been cut scores in many jurisdictions.

One more way of explaining why a drop at this point of the bell curve is particularly significant. The NCBE has not yet released the distributions of scores, but the bell curve linked above should be instructive, and the change from 2011 to 2016 is useful to consider.

In February 2011, just 39.6% of all test-takers had a score of 135.4 or lower. 13.7% had a score in the range of 135.5 to 140.4, and 46.6% had a score of 140.5 or higher. (Consider the chart above for reference as to what those scores might mean.) In February 2016, however, 51.1% of all test-takers had a score of 135.4 or lower, an 11.5-point jump. 13.7% had a score in the range of 135.5 to 140.4, and just 35.1% had a score of 140.5 or higher.
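Put as simple arithmetic, using the band shares quoted above, the shift implies a failure-rate jump of roughly 11.5 points in any jurisdiction whose cut score sits near 135 (a rough illustration, since state-level distributions differ from the national one):

```python
# Shares of national February test-takers by scaled-score band (quoted above)
feb_2011 = {"<=135.4": 0.396, "135.5-140.4": 0.137, ">=140.5": 0.466}
feb_2016 = {"<=135.4": 0.511, "135.5-140.4": 0.137, ">=140.5": 0.351}

# For a jurisdiction with a cut score near 135, roughly everyone in the
# bottom band fails, so the implied failure rate rises by about 11.5 points.
jump = feb_2016["<=135.4"] - feb_2011["<=135.4"]
print(f"{jump:.1%}")  # 11.5%
```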

That's because this particular drop in the score is at a very perilous spot on the curve. Bar takers are performing just a little worse in a relative sense. But when the distribution of performance is put up against the cut score, this is precisely the point that would have the most dramatic national impact.

I hope these explanations help illustrate what's happening on the bar exam front--and, of course, I welcome corrections or feedback to improve these visualizations in the future!

February 2017 MBE bar scores collapse to all-time record low in test history

UPDATE: Some wondered about the scale used for the visualization below, and I respond with some thoughts in a subsequent blog post.

On the heels of the February 2016 multistate bar exam (MBE) scores reaching a 33-year low, including a sharp drop in recent years, and a small improvement in the July 2016 test while scores remained near all-time lows, we now have the February 2017 statistics, courtesy of Pennsylvania (PDF). After a drop from 136.2 to 135 last year, scores dropped another full point to 134. It likely portends a drop in overall pass rates in most jurisdictions.

This is the lowest February score in the history of aggregated MBE results. (The test was first introduced in 1972 but, as far as I know, national aggregate statistics begin in 1976, as data demonstrates.) The previous record low was 134.3 in 1980.

It's worth noting that the February 2017 test had a small change in its administration: rather than 190 questions scaled into the score and 10 experimental questions, the split in this exam was 175/25. It's unlikely (PDF) this caused much of a change, but it's worth noting as a factor to think about. And the decline is not because the MBE was "harder" than usual. Instead, it primarily reflects continued fall-out from law schools accepting more students of lower ability, then graduating those students, who go on to take the bar exam. Given the relatively small cohort that takes the February test, it's anyone's guess what this portends for the July 2017 test.

Visualization note: the non-zero Y axis is designed to demonstrate recent relative performance of bar scores, not absolute scores.

California's move to a two-day bar exam might affect some schools more than others

I was among the first to discuss California's planned move from a three-day bar exam to a two-day bar exam. The first two-day exam will occur in the July 2017 administration.

The old three-day model weighted the Multistate Bar Exam component (the 6-hour multiple-choice test) at about 1/3 of the overall score, and the other two days of essays at about 2/3 of the overall score. When the bar studied the issue, it found little difference in assessing aptitude or in scoring between a 1/3-2/3 model and a two-day bar where both sections would be weighted roughly equally (as most states do).

That's true at the macro level. For individual test-takers, of course, that can vary wildly. And even at the school level, we may see somewhat noticeable differences between the MBE scores and the essay scores.
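As a rough sketch of how the re-weighting could move a school's combined mean, here is the basic arithmetic. The section means are hypothetical numbers on California's 2000-point reporting scale, and the Bar's actual scaling and equating are more involved than a simple weighted average:

```python
def combined_mean(mbe_mean, written_mean, mbe_weight):
    """Blend scaled MBE and written-section means at a given MBE weight."""
    return mbe_weight * mbe_mean + (1.0 - mbe_weight) * written_mean

# Hypothetical school whose graduates do better on the MBE than the essays
mbe, written = 1480.0, 1420.0
old_model = combined_mean(mbe, written, mbe_weight=1/3)  # three-day exam
new_model = combined_mean(mbe, written, mbe_weight=1/2)  # two-day exam
print(f"1/3-2/3 model: {old_model:.0f}   1/2-1/2 model: {new_model:.0f}")
```

A school relatively stronger on the MBE gains under the equal weighting, and a school relatively stronger on the essays loses ground, even though the statewide averages barely move.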

Thanks to a pretty sizeable disclosure from the California bar, we can assess how individual schools fared on the bar, and what their scores would look like if scored under the July 2017 1/2-1/2 model.

This, of course, has many limitations, which I'll start listing here. First, these are the mean scores; they correlate highly with pass rates, but not perfectly. Note that Stanford's mean score blows all other schools out of the water, but its first-time pass rate is only a few percentage points better than others'. That means movement up or down in the mean scores would likely improve or worsen the pass rate, but by amounts not immediately ascertainable. Second, just because the bar was scored this way in July 2016 does not mean we would expect graduates of these schools to perform similarly in 2017. Indeed, evidence like this would probably drive a change in bar study habits! Graduates would be inclined to focus more attention on the MBE and less on the essays, which would change the scores in unknown ways.

The chart at the right shows in red circles what schools' mean scores were this July under the 1/3-2/3 scoring model. The blue circles are what the scores would have been under the 1/2-1/2 model. (Recall that a passing score in California is a 1440.) As you can see, there is almost no difference for most schools. I flagged four schools that might see the biggest changes--San Diego for the better; and Irvine, San Francisco, and Thomas Jefferson for the worse.

And recall the caveats above--this does not mean the changes will translate into demonstrable differences in the pass rate, and past performance is not an indicator of future success. This is particularly true for the three schools I identified that might expect lower means--Irvine is well above the passing score, and San Francisco and Thomas Jefferson are well below it, meaning marginal differences in the mean score would probably affect very few test-takers. (For schools closer to the 1440 score, we might expect slightly larger differences, again with the significant caveats listed above about the limited value of using the means.) But it should certainly shift attention in graduate preparation next summer--and whether that changes scores remains to be seen.

The collapse of bar passage rates in California

My colleague Paul Caron has helpfully displayed the data of the performance of California law schools in the July 2016 California bar exam. It's worth noting that the results aren't simply bad for many law schools; they represent a complete collapse of scores in the last three years.

The chart here shows the performance of first-time California bar test-takers who graduated from California's 22 ABA-accredited law schools in the July 2013, 2014, 2015, and 2016 administrations of the exam. The blue line in the middle is the statewide average among California's ABA-accredited law schools. (The overall passage rate among all ABA-accredited law schools is usually a point or two lower than this average.)

The top performers are mostly unchanged from their position a few years ago. The middle performers decline at roughly the rate of the statewide average. But the bottom performers show dramatic declines: from 65% to 22%, and from 75% to 36%, to identify two of the most dramatic declines.

It's true that changes to the applicant pool have dramatically impacted law schools, as I identified three years ago and as continues to hold true. There have been fewer applicants to law schools; those applicants are often less qualified--with lower LSAT scores and UGPAs than previous classes; and schools are not shrinking their class sizes quickly enough to respond to the decline in quality. Some of the more at-risk schools face significant attrition each year as their very best students transfer to higher-ranked institutions, further diluting the quality of the graduating classes. (I've also occasionally read critiques that law schools are not "doing enough to prepare" students to take the bar exam, but I highly doubt law schools have dramatically changed their pedagogy over the last few years in a way that would cause such a decline.)

And the decline in bar pass rates in 2014 was only the first in a longer period of declining scores, as I explained back then. And it's not even clear that pass rates have reached bottom.

I noted earlier this year that the new mandate from the ABA that 75% of a law school's graduates must pass the bar exam within two years of graduation will uniquely impact California--despite bar test-takers being far more able in California, they fail at much higher rates. Whether bar pass rates will improve for some of these schools in the future, or whether the state bar intervenes to ease its scoring practices, remains to be seen.

Note: I did not start my Y-axis at 0% to avoid unnecessary white space at the bottom of the graph, and it is designed to show relative performance rather than absolute performance.