Some mixed results for February 2016 first-time bar exam test-takers

Earlier, I depicted the 33-year low in Multistate Bar Exam scores from the February 2016 test. But, as one commenter noted, that doesn't necessarily tell us a whole lot about the first-time pass rate, particularly if a glut of repeaters taking the bar is inflating the failure rate. Now that results are trickling in, we are seeing declining pass rates among first-time test-takers in most jurisdictions--but, in a piece of good news for bar test-takers, the declines are not as sweeping as those we've seen before.

The results are decidedly mixed, albeit tilted more in the negative direction than the positive. Admittedly, these are only a handful of jurisdictions, and the February pool of test-takers is much smaller than the July pool, which may skew results. The following figures are year-over-year changes in pass rates, listed only for first-time test-takers where those statistics were available. (More states disclose overall pass rates, but first-time rates tend to be more valuable for identifying trends among recent law school graduates. A short sketch after the list makes the arithmetic explicit.)

Florida, -6 points (February 2015: 64%; February 2016: 58%)

Indiana, -12 points (February 2015: 74%; February 2016: 62%)

New Mexico, -9 points (February 2015: 90%; February 2016: 81%)

Oregon, unchanged (February 2015: 69%; February 2016: 69%)

Pennsylvania, +5 points (February 2015: 69%; February 2016: 74%)

Tennessee, +4 points (February 2015: 64%; February 2016: 68%)

Washington, -4 points (February 2015: 75%; February 2016: 71%)
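To make the arithmetic behind these figures explicit, here is a minimal sketch in Python--my own illustration, not anything from the state bars' releases--that computes the year-over-year deltas from the first-time pass rates listed above:

```python
# Year-over-year change in first-time pass rates, February 2015 vs. February 2016.
# The figures are those listed above; the code simply computes the point changes.
first_time_pass_rates = {
    # state: (Feb 2015 %, Feb 2016 %)
    "Florida": (64, 58),
    "Indiana": (74, 62),
    "New Mexico": (90, 81),
    "Oregon": (69, 69),
    "Pennsylvania": (69, 74),
    "Tennessee": (64, 68),
    "Washington": (75, 71),
}

for state, (feb_2015, feb_2016) in first_time_pass_rates.items():
    delta = feb_2016 - feb_2015
    print(f"{state}: {delta:+d} points (Feb 2015: {feb_2015}%; Feb 2016: {feb_2016}%)")
```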

February 2016 MBE bar exam scores drop to lowest point since 1983

I've written extensively about the bar exam, including the significant decline in bar exam scores, specifically on the Multistate Bar Exam, and the corresponding decline in pass rates in most jurisdictions. The February 2016 results mark the fourth consecutive exam to display a significant decline in MBE scores. In fact, it's the lowest score on the February test since 1983--a deeper historical low than even the July 2015 results, which were the lowest since 1988. Below is a visualization of February test scores since 2005--note the precipitous drop in the last two tests. (I may visualize results back to 1983 in the future.)

This will likely mean a decline in pass rates in most jurisdictions, news of which will trickle out over the next several weeks. The decline in scores continues to correlate with declines in student quality, as law schools admitted classes with an increased number of students at risk of failing the bar exam. Whether other factors contribute to the decline remains an open question. But this helps illustrate that the problems are not one-time issues as the result of ExamSoft--they are structural and long-term issues with significant consequences. I'll blog more about this in the near future.

Will states like California lower their bar standards to help schools comply with new ABA mandate?

Unintended consequences are common. One develops a great idea; it takes form; it is discussed and debated; and, finally, it takes effect. But it may result in unintended consequences, and it's always fascinating to think about what those might be. I've extensively discussed the unintended consequences of matters such as LSAT administration, accommodated LSAT test-taking, and distortions in law school admissions.

The American Bar Association has moved closer to approving a new accreditation standard: at least 75% of a school's law graduates must pass the bar exam within two years. It is a much simpler rule than the previous standard, and it holds schools to a higher one.

Might there be unintended consequences? Many schools currently fail that standard. Professor Brian Leiter rightly wonders whether schools will focus more on bar prep than on other aspects of legal education. It is also likely that many schools will seriously reconsider their class sizes, admissions standards, academic dismissal rates, and transfer policies.

But it's also worth noting that not all state bar exams are created equal. Perhaps nothing makes that point so clearly as the passing scores required for the Uniform Bar Exam, a standardized exam that yields a single score but carries varying minimum scores for admission in different states. A 260 will pass in Minnesota or Alabama, while a 280 is required in Alaska or Idaho. My colleague Rob Anderson has identified the varying degrees of difficulty of many states' bar exams. And California is at the top--I've identified how California bar test-takers are more capable than test-takers in other states, yet fail at higher rates because of the difficulty of the exam.

So take a state like California. It is very likely that a number of its schools will face serious difficulty meeting this standard--first-time pass rates at many schools are well below 50%, much less 75%, and even counting students who pass on a retake, clearing the 75% threshold may be a challenge.

Some schools may begin to "export" students to jurisdictions with easier exams and higher pass rates--perhaps incentivizing them with stipends on the condition they take the exam in an easier jurisdiction.

But that's a potential unintended consequence that is school-centered. Might there be bar-centered consequences?

Suppose the state bar of California suddenly finds that four or five of its law schools are at risk of losing ABA accreditation. While some may praise that outcome, it's not clear that the state bar would welcome it. It might be inclined to lower its standards to increase pass rates (more in line with other states) and keep its schools in the ABA's good graces. Other states with particularly difficult bar exams, or with law schools that have significant political clout, may do the same.

Of course, this is speculative. And I make no claim as to whether such decisions would be good or bad--one could think some state bars are too difficult and that pass rates should rise, or one could think that the bar should not lower its standards. My aim is simply to identify some of the potential consequences that may come about from proposals like this. Only time will tell whether such consequences actually arise.

Heat and light, LSAT scores and bar passage data

If you read this blog with any frequency, you're undoubtedly aware that what largely began as my idiosyncratic thoughts about election law have given way to a significant amount of content on legal education and the bar exam.

Recently, many pixels have been used to discuss the utility of the LSAT, and the relationship between LSAT scores and bar pass rates, which has spurred many larger discussions about the nature of legal education. They are easily discoverable.

Bernie Burk several months ago used the metaphor of heat and light in the midst of some such discussions, which I found quite useful. And I commend to all readers Jerry Organ's comments at the Legal Whiteboard, The Opaqueness of Bar Passage Data and the Need for Greater Transparency. Measured, careful, thoughtful analysis is what I find most useful in such discussions--analysis that not only concedes its limitations but does not minimize those concessions. I remain deeply grateful for the thoughtful contributors in this space who have spurred me to think carefully and critically on all fronts. And I hope my posts remain useful.

Visualizing the overall bar pass rate declines in 2015 across jurisdictions

In early September, I highlighted the warning signs of bar pass rate declines in several jurisdictions. Shortly thereafter, the NCBE disclosed that MBE scores had hit a 27-year low. Last year, I offered a rough compilation of the decline in overall pass rates, suggesting that ExamSoft was not to blame but that the MBE itself may have contributed to the decline. (Later evidence convinced me that the MBE was likely not responsible for the decline.) By March, we had more granular data for jurisdiction-by-jurisdiction results.

Here are the changes in overall bar pass rates between July 2014 and July 2015 for a handful of jurisdictions with readily disclosed top-line data.

The overall declines are far from universal in these jurisdictions, and the median decline is about 2 points. But several jurisdictions did experience overall declines of at least 5 points.

Cobbling together the overall results from July 2013 to July 2015--a two-year change--the trends are fairly stark in most jurisdictions, often combining significant declines in 2014 with more modest declines in 2015.

The entering class profiles for the next few law school classes suggest that these trends will continue, at least to some degree--the magnitude remains something of an open question. How schools react, or how prospective bar examinees react, may further change these projections. And next spring, we'll have the data for first-time bar pass rates in these jurisdictions, which will provide a slightly more useful comparison of the overall trends.

California bar exam takers are far more able than others nationwide but fail at much higher rates

The California July 2015 bar results were recently released, reflecting a modest drop in scores, slightly smaller than in other jurisdictions this year. The overall pass rate dropped from 48.6% in July 2014 to 46.6%. The first-time pass rate dropped from 61% to 60%. And among California ABA-accredited schools, the first-time rate also dropped a point, to 68%.

California is one of the rare jurisdictions that also discloses its statewide mean scaled MBE score. The NCBE discloses the nationwide mean scaled MBE score, which has dropped fairly significantly over the last couple of years. But California has consistently outperformed the nationwide cohort, sometimes rather dramatically.

California's mean scaled MBE score was 142.4 this July, 2.5 points higher than the nationwide average of 139.9. In fact, it's even 0.9 points higher than last year's nationwide average.

The performance of California bar takers is all the more impressive given that over 8,000 people typically take its July bar out of the 50,000 or so MBE test-takers nationwide. Despite representing over 15% of MBE test-takers, California significantly outperforms the national average.

Pennsylvania (142.2) and Tennessee (139.8) posted lower mean scaled MBE scores. But their pass rates are dramatically higher--71.2% and 64.5%, respectively, compared to California's 46.6% in July 2015.

Connecticut and Georgia also disclose the mean scaled MBE scores on a school-by-school basis. I plotted those schools with the California, Pennsylvania, and Tennessee overall results to illustrate how dramatic an outlier California is.

California's high cut score means that many test-takers who would pass the bar in another jurisdiction fail the California bar. Indeed, if the California cut score were closer to Georgia's, Pennsylvania's, or Tennessee's, the overall pass rate would be around 72%. (It might be higher still at Connecticut's cut score.) In fact, about half of those who failed the California bar in July 2015 would have passed in another jurisdiction.
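The mechanics of that counterfactual are simple: the pass rate is just the share of examinees at or above the cut score, so a lower cut mechanically passes more of the same pool. Here is a minimal sketch with an entirely hypothetical score distribution--the 1440 figure echoes California's passing score on its 2,000-point scale, but the distribution parameters and the lower cut are invented for illustration, not drawn from the actual data:

```python
import numpy as np

# Hypothetical illustration of how a cut score drives the pass rate for a fixed pool
# of examinees. The score distribution below is invented; only the 1440 cut is meant
# to echo California's passing score on its 2,000-point scale.
rng = np.random.default_rng(0)
scores = rng.normal(loc=1430, scale=95, size=8_000)  # made-up distribution of total scaled scores

def pass_rate(scores: np.ndarray, cut: float) -> float:
    """Share of examinees scoring at or above the cut."""
    return float((scores >= cut).mean())

print(f"Pass rate at a 1440 cut: {pass_rate(scores, 1440):.1%}")
print(f"Pass rate at a 1390 cut: {pass_rate(scores, 1390):.1%}")
```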

What the "right" cut score should be is another question, as are how the essays bear upon the final score and whether these few states are sufficiently representative. But the data usefully illustrate that California's very low pass rates are not for lack of relative quality among its test-takers.

What happens after a test-taker fails the bar on a first attempt? Some data from Texas

Michael Simkovic and Paul Horwitz have a few thoughts on passing, failing, and retaking the bar exam. I had a few things to add--though, I admit, less about the specific issues they identified!

First, I've blogged extensively about the decline in bar pass rates and the expectation that the declines will continue. A few caveats are in order. Much of the top-line data I use this cycle is based on overall pass rates; first-time test-taker rates are almost always higher, and first-time rates at ABA-accredited law schools are higher still. But the data we have so far from most jurisdictions is limited to overall pass rates (though some disclose more specific information); when more granular data is released this spring, I'll discuss that, too. At the same time, the decline even in overall pass rates is a sign of the decline in overall graduate quality.

First-time pass rates are often the gold standard for a number of reasons. The first, perhaps to most schools' chagrin, is the factor in U.S. News & World Report that evaluates a school's first-time pass rate in relation to the jurisdiction's overall rate. But, importantly, schools prefer bar-passage success, and that's most easily identified with first-time pass rates. (Professors Simkovic and Horwitz have more thoughts about the value or importance of those kinds of things, and about certain schools that are perhaps more vulnerable, which I'll reserve for the time being.) And loan repayment is expected to begin shortly after graduation, which makes bar passage--and turning one's efforts toward a career--all the more important: a graduate who fails the bar must weigh the opportunity costs of taking time to retake it, the potential time and income lost at the start of a legal career, the sunk cost of a legal education and a prior bar failure, and other such matters.

We have little data, however, about what happens once someone fails the bar exam. There are few longitudinal studies of a specific batch of test-takers. Many jurisdictions simply lump all "repeaters" into a single data set; those that report the number of attempts don't indicate when the previous attempt took place.

But one intriguing study from Texas followed the July 2004 bar exam. The data sets aren't as intuitive as the visuals, so I've offered a couple of ways of interpreting what happened to the folks who took the July 2004 exam--and how they fared over the next four administrations of the exam.

The overwhelming majority of the 2293 first-time test-takers passed on the first attempt. But what happened to the other 474? A majority of that remainder passed on subsequent attempts, but a number dropped out with each subsequent round. 62, for instance, never tried again after the first failed attempt. But 224 passed the second time around. You can see from the graphic that several more dropped out with each subsequent attempt, until the fourth attempt had just 23 test-takers--and 13 of those passed on their fourth try.
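For those who prefer the arithmetic to the graphic, here is a minimal sketch using only the figures cited above. The post doesn't give third-attempt figures, so that row is omitted, and the second-attempt taker count rests on my assumption that the 62 who never retook are the only first-time failers missing from the second round:

```python
# Per-attempt summary of the July 2004 Texas cohort, using only the figures cited above.
# Third-attempt figures aren't given, so that attempt is omitted. The second-attempt
# taker count is an assumption: 474 first-time failers minus the 62 who never retook.
attempts = [
    # (attempt number, takers, passers)
    (1, 2293, 2293 - 474),  # 474 did not pass on the first try
    (2, 474 - 62, 224),     # assumed taker count; 224 passers
    (4, 23, 13),
]

for attempt, takers, passers in attempts:
    print(f"Attempt {attempt}: {takers} takers, {passers} passed ({passers / takers:.0%})")
```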

Visualizing the outcomes by each administration offers the following perspective, a kind of narrowing filter:

The study further examined the scores of the test-takers. In each subsequent administration of the test, the scores of the test-takers improved. (On average, of course--some had declines, and some improved far more than the average test-taker.)

Understandably, single-attempt test-takers had by far the highest scores on the first attempt--even though they included 61 test-takers who failed the bar and would never attempt it again. And, perhaps predictably, those who only took the bar twice had higher scores on the first attempt than those who would go on to take the bar three or four times. But it's notable that in each group, the results improved. Granted, there's some self-selection, in the sense that a few dozen who failed an exam would drop out of the next attempt. And, presumably, those motivated to study with greater discipline are those who are going to take the bar on subsequent occasions. But the suggestion from the scores is that continued time and effort to learn the law will, over time, lead to success.

There are different issues about whether these results are good or bad, whether retaking the bar exam after failing is a good or bad idea depending upon the circumstances, whether there are other costs to retaking the bar in lieu of other options, and so on--many things that have been discussed elsewhere and will continue to be discussed. But I find these data points useful for tracking what actually happens to the test-takers from a single administration, and as a starting point for considering what to do with changes in pass rates.

Roundup of recent news about legal education and the bar exam

I've been mentioned in a couple of outlets recently regarding legal education and the bar exam, and thought I'd link to those pieces here and include a brief roundup of news and events in this area across this blog. You can find me on Twitter @derektmuller.

For quick access to categories of interest, check out the Legal education or Bar exam categories on this blog.

Running totals of state bar exams in July 2015 are available here, and an analysis of the 27-year low in the Multistate Bar Exam scores here.

My July 2014 bar exam results wrap-up is here. A summary of the National Conference of Bar Examiners' 2014 data is here. Finally, my perspective in November 2014 anticipating this and future bar exam score declines: the bleak short-term future for law school bar passage rates.

And I even write about subjects other than the bar exam and legal education! For my scholarly interests--primarily election law--see the remainder of this blog, and my SSRN page.

No, the MBE was not "harder" than usual

I frequently read comments, on this site and others, asserting that the bar exam was simply harder than usual. Specifically, many people, often law faculty (who didn't take the exam this year) or recent graduates (the vast majority of whom are taking the bar exam for the first time), insist that the bar, especially the Multistate Bar Exam ("MBE"), is "harder" than before.

Let's set aside, for now, (1) rampant speculation, (2) cognitive biases that make a multiple-choice test that counts for something feel "harder" than ungraded practice, (3) erroneous comparisons between the MBE and bar-prep companies' materials, (4) retroactive fitting of negative bar results to negative bar experiences, and (5) the use of comparatives in the absence of a comparison.

Let's instead focus on whether the July 2015 bar exam was "harder" than usual. The answer is, in all likelihood, no--at least, almost assuredly, not in the way most are suggesting, i.e., that the MBE was harder in such a way that it resulted in lower bar passage rates.

I'll explain why this is the right question, and why I include the caveat "almost assuredly," below. First, it might be beneficial to take a moment to explain how the MBE is scored, and how that should factor into an analysis.

I. What the MBE scale is

Many are familiar with a "curved" exam, either from college or law school. The MBE is not curved. (For that matter, neither is the LSAT.)

In college, letter grades are commonly assigned based on converting numeric scores to a letter (e.g., 90-100 is an A, 80-89 is a B, etc., with some gradations for +'s and -'s). A common way of curving the exam is to add points to the top grade in the class to make it 100, and add the same number of points to everyone else's score. If the highest grade on the exam is a 92, then everyone gets an additional 8 points. If the highest grade is a 98, everyone gets an additional 2 points (and most classmates complain that this student "wrecked the curve"). This isn't really a "curve" in the typical use of the term, but it's a common way of distributing grades.
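As a concrete (and entirely made-up) illustration of that add-points approach:

```python
# The add-points "curve" described above: boost the top raw score to 100 and give
# every other student the same boost. The raw scores are invented for illustration.
raw_scores = [92, 88, 85, 77, 70]

boost = 100 - max(raw_scores)                     # 8 points when the top raw score is 92
curved_scores = [score + boost for score in raw_scores]

print(curved_scores)                              # [100, 96, 93, 85, 78]
```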

Instead, most law schools "curve" grades based on a pre-determined distribution. Consider the University of California-Irvine. In a class of, say, 80 students, instructors are required to give 3 or 4 A+'s; the next 19%-23% of grades are A's; the next 19%-23% are A-'s; and so on.

But the MBE uses neither of these. The MBE uses a process known as "equating," and then "scales" the test. These are technical statistical measures, but here's what they're designed to do. (Let me introduce an important caveat here: the explanations that follow are grossly oversimplified, but they capture the most basic ideas of measurement!)

Imagine we have two groups of students. They are taking a test, but on different days. And we don't want to give them the exact same test, because, well, that's a bad idea--the second group might get answers from the first group. But we want to be able to compare the two groups of students to each other.

It wouldn't really do to use our law school "curve" above. After all, what if the second group is much smarter than the first? If we, say, fixed a 75% pass rate, why should members of the second group be penalized for testing among a much smarter cohort, when their chances would have been better had they tested with the first group?

Standardized testing needs a way of accounting for this. So it does something called equating. It uses versions of questions from previous administrations of the exam, known as "anchor" questions or "equators." It then uses these anchor questions to compare the two different groups. One can tell whether the second group performed better, worse, or similarly on the anchor questions, which allows one to compare groups over time. It then examines how the second group did on the new questions. It can then better evaluate performance on those new questions by scaling the score based on the performance on the anchor questions.

This is why the bar jealously guards its exam questions and why there is such tight security around the exam. It needs some of the questions to compare groups from year to year. But as the law changes, or simply to keep the test relatively fresh, there are always new questions introduced into the exam.

II. How the MBE scale works

It's one thing to read about the math--yes, you might think, standardized test administrators have some statistical magic--but it's another thing to understand it. How does it work?

Consider two groups of similarly-situated test-takers, Group A and Group B. They each achieve the same score, 15 correct, on a batch of "equators." But Group A scores 21 correct on the unique questions, while Group B scores just 17 right.

We can feel fairly confident that Groups A and B are of similar ability. That's because they achieved the same score on the anchor questions, the equators that help us compare groups across test administrations.

And we can also feel fairly confident that Group B had a harder test than Group A. (Subject to a caveat discussed later in this part.) That's because we would expect Group B's scores to look like Group A's, given that the groups are of similar ability. Because Group B performed worse on the unique questions, it looks like it received a harder batch of questions.

The solution? We scale the answers so that Group B's 17 correct answers look like Group A's 21 correct answers. That accounts for the harder questions. Bar pass rates between Group A and Group B should look the same.

In short, then, it's irrelevant if Group B's test is harder. We'll adjust the results because we have a mechanism designed to account for variances in the difficulty of the test. Group B's pass rate will match Group A's pass rate because the equators establish that they are of similar ability.

When someone criticizes the MBE as being "harder," for that statement to have any relevance, that person must mean that it is "harder" in a way that caused lower scores; as this example demonstrates, typical equating and scaling are designed to prevent exactly that.

Let's instead look at a new group, Group C.

On the unique questions, Group C did worse than Group A (16 right as opposed to 21 right), much like Group B (17 to 21). But on the equators, the measure for comparing performance across tests, Group C also performed worse, 13 right instead of Group A's 15.

We can feel fairly confident, then, that Group C is of lesser ability than Group A. Their performance on the equators shows as much.

That also suggests that when Group C performed worse on the unique questions than Group A, it was not because the questions were harder; it was because the group was of lesser ability.

There are, of course, many more nuanced ways of measuring how different the groups are, examining the performance of individuals on each question, and so on. (For instance, what if Group C also got harder questions by an objective measure--as in, Group A would have scored the same score as Group C on the uniques if Group A answered Group C's uniques? How can we examine the unique questions independent of the equators, in the event that the uniques are actually harder or easier?) But this is a very crude way of identifying what the bar exam does. (For all of the sophisticated details, including how to weigh these things more specifically, read up on Item Response Theory.)

So, when the MBE scores decline, they decline because the group, as a whole, has performed worse than the previous group. And we can measure that by comparing their performance on similarly-situated questions.
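To make the Group A/B/C logic concrete, here is a deliberately crude sketch--my own toy model in Python, not the NCBE's actual procedure, which relies on Item Response Theory. Relative ability is read off the anchor questions, and each group's unique-question score is then rescaled so that groups of equal ability receive equal scaled scores, whatever the difficulty of their particular batch of unique questions:

```python
# A crude toy version of equating and scaling (not the NCBE's actual method). Each
# group is summarized by its average number correct on the shared "anchor" questions
# and on its unique questions, using the figures from the example above.
groups = {
    # group: (anchor questions correct, unique questions correct)
    "A": (15, 21),
    "B": (15, 17),  # same anchors as A: same ability, apparently harder unique questions
    "C": (13, 16),  # worse anchors too: lower ability, not harder questions
}

ref_anchor, ref_unique = groups["A"]  # treat Group A's form as the reference

for name, (anchor, unique) in groups.items():
    ability_ratio = anchor / ref_anchor          # ability relative to Group A, per the anchors
    scaled_unique = ref_unique * ability_ratio   # expected score on the reference form
    print(f"Group {name}: anchors {anchor}, raw uniques {unique}, scaled uniques {scaled_unique:.1f}")
```

On this toy model, Group B's raw 17 scales up to match Group A's 21 because their anchor performance is identical, while Group C's scaled score stays lower because its anchor performance reveals lower ability--the intuition behind saying that a "harder" batch of questions, by itself, does not lower scaled scores.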

III. Did something change test-taker performance on the MBE?

The only way to say that the failure rate increased because of the test is if there were a problem with the test itself. It might be, of course, that the NCBE created an error in its exam by using the wrong questions or scoring it incorrectly, but there hasn't been such an allegation and we have no evidence of that. (Of course, we have little evidence of anything at all, which may be an independent problem.)

But this year, some note, is the first year that the MBE includes seven subjects instead of six. Civil Procedure was added as a seventh subject on the February 2015 exam.

Recall how equating works, however. We equate using shared anchor questions given to different groups of test-takers. That means Civil Procedure questions were not used to equate the scores. If the Civil Procedure questions were more challenging, or if students performed worse on those questions than on others, we can always go back and see how they did on the anchor questions. Consider Groups A & B above: if they are similarly skilled test-takers, but Group B suffered worse scores on the uniques because of some defect in the Civil Procedure questions, then scaling will cure those differences, and Group B's scores will be scaled to resemble Group A's.

Instead, a "change" in the bar pass rates derived from the exam itself must affect how one performs on both the equators and the uniques.

The more nuanced claim is this: Civil Procedure is a seventh subject on the bar exam. Law students must now learn an additional subject for the MBE. Students have limited time and limited mental capacity to learn and retain this information. The seventh subject leads them to perform worse than they would have on the other six subjects. That, in turn, causes a decline in their performance on the equators relative to previous cohorts. And that causes an artificial decline in the score.

Maybe. But there are several factors suggesting (note, not necessarily definitively!) this is not the case.

First, this is not the first time the MBE has added a subject. In the mid-1970s, the 200-question MBE consisted of just five subjects: Contracts, Criminal Law, Evidence, Property, and Torts. (As a brief note, some of these subjects may have become easier; indeed, the adoption of the Federal Rules of Evidence simplified the questions at a time when the bulk of the evidentiary questions were based on common law. But, of course, there is more law today in many of these areas, and perhaps more complexity in some of them as a result.)

By the mid-1970s, the NCBE considered adding some combination of Civil Procedure, Constitutional Law, and Corporations to the MBE set. It ultimately settled on Constitutional Law, but not without significant opposition. Indeed, some even went so far as to suggest that it was not possible to draft objective-style multiple choice questions on Constitutional Law. (I've read through the archives of the Bar Examiner magazine from those days.) Nevertheless, the NCBE plunged ahead and added a sixth subject in Constitutional Law. There was no great outcry about changes in bar pass rates or the inability of students to handle a sixth subject; there was no dramatic decline in scores. Instead, Constitutional Law was, and is, deemed a perfectly ordinary part of the MBE, with no complaints that the addition of a sixth subject proved overwhelming. Adding a subject to the MBE is not unprecedented.

Furthermore, it's an overstatement to treat the MBE's seventh subject as an entirely new burden when all bar exams (to my knowledge) already tested Civil Procedure. Yes, the testing occurred in the essay components rather than the multiple-choice components, but students were already studying it, at least somewhat, anyway. And in many jurisdictions (e.g., California), it was federal Civil Procedure that was tested, not state-specific procedure.

Finally, Civil Procedure is a substantial required course at (I believe) every law school in America--the same cannot be said, at the very least, of Evidence or Constitutional Law. To the extent it's something students need to learn for the bar, they have generally already been required to learn it in law school. (Retention and comprehension, of course, are other matters.)

These arguments are not definitive. It may well be that they are wrong and that Civil Procedure is a uniquely disruptive subject, sui generis. But this points to a larger issue: such arguments are largely speculation, and they require more evidence before we can be confident that Civil Procedure is (or is not) responsible, in any meaningful measure, for the lower MBE scores.

We do, however, have two external factors that predict the decline in MBE scores and suggest that declining student quality, rather than a more challenging set of subjects, is responsible. First, law schools have been increasingly admitting--and, subsequently, increasingly graduating--students with lower credentials, including lower undergraduate grade-point averages and lower LSAT scores. Jerry Organ has written extensively about this. We should expect declining bar pass rates as law schools continue to admit, and graduate, these students. (The degree of decline remains a subject of some debate, but a decline is to be expected.)

Second, the NCBE has observed a decline in MPRE scores. Earlier and more detailed responses from the NCBE revealed a relatively high correlation between MPRE and MBE scores. And because the MPRE is not subject to the same worries about changes in the subject matter tested, it serves as a useful independent signal of what to expect on the MBE.

IV. Some states are making the bar harder to pass--by raising the score needed to pass

Illinois, for instance, has announced that it will increase the score needed to pass the exam. When it adopted the Uniform Bar Exam, Montana decided to increase the score needed to pass.

These factors are unrelated to the changes in MBE scores. We might expect pass rates in those jurisdictions to decline, and we might attribute that decline to something other than the MBE scores.

And it actually raises a number of questions for those jurisdictions. Why is the pass score being increased? Why did the generation of lawyers who passed that bar and who were deemed competent, presumably, to practice law conclude that they needed to make it a greater challenge for Millennials? Is there evidence that a particular bar score is more or less effective at, say, excluding incompetent attorneys, or minimizing malpractice?

These are a few of the questions one might ask about why one may want a bar exam at all--its function, or its role as gatekeeper, and so on. And it's a question about the difficulty of passing the bar, which is a distinct inquiry from the question about the difficulty of the MBE questions themselves.

V. Concluding thoughts

Despite some hesitation or tentative conclusions offered, I'll restate something I began with: "Let's instead focus on whether the July 2015 bar exam was 'harder' than usual. The answer is, in all likelihood, no--at least, almost assuredly, not in the way most are suggesting, i.e., that the MBE was harder in such a way that it resulted in lower bar passage rates."

We can see that the MBE uses Item Response Theory to account for variances in test difficulty, and that the NCBE scales scores to ensure that harder or easier questions do not affect the outcome of the test. We can also see that merely adding a new subject, by itself, would not decrease scores. Instead, something would have to affect test-takers' ability to an extent that it would make them perform worse on similar questions. And we have some good reasons to think (but, admittedly, not definitively, at least not yet) that Civil Procedure was not that cause, and some good reasons (from declining law school admissions standards on LSAT scores and UGPAs, and from MPRE scores) to think that the decline is more related to test-takers' ability. More evidence and study are surely needed to sharpen the issues, but this post should clear up several points about MBE practice (in, admittedly, deeply, perhaps overly, simple terms).

Law schools ignore this to their peril. Blaming the exam without an understanding of how it actually operates masks the major structural issues confronting schools in their admissions and graduation policies. And it is almost assuredly going to get worse over each of the next three July administrations of the bar exam.