Visualizing the grim final numbers from the July 2014 bar exam

Most are by now undoubtedly aware of the significant decline in MBE scores and bar pass rates on the July 2014 bar exam. I've recently been persuaded (though not wholly) by the NCBE's explanation that the July 2014 cohort had generally worse predictors and performed worse as a result. If true, that suggests a grim reality as predictors worsen over the next several administrations.

I had some data earlier, cobbled together from state-by-state data sets using overall pass rates, suggesting, among other things, that the ExamSoft fiasco was not (primarily) responsible for the decline.

The NCBE has released its statistics for the 2014 administrations of bar exams. That means we have access to complete data sets, and to more precise data (e.g., first-time pass rates instead of overall pass rates). Below is a chart of changes in first-time bar pass rates among all 50 states and the District of Columbia between July 2013 and July 2014, with some color coding relating to the MBE and ExamSoft. Thoughts below.

As noted previously, the only non-MBE jurisdiction, Louisiana, saw a significant improvement in bar pass rates among first-time test-takers. So, too, did North Carolina--an MBE and ExamSoft jurisdiction with its essays on Tuesday. Congrats to the lucky test-takers in the Tar Heel State. Elsewhere, however, you see nearly across-the-board declines among first-time test-takers, with modest improvements in just a few jurisdictions.

Now it's wait-and-see until the July 2015 administration to determine whether this decline is the start of a trend or, perhaps, a one-off aberration.

California poised to cut bar exam from three days to two

Tomorrow, the Committee of Bar Examiners for the State of California meets to consider whether to cut the bar exam from three days to two days.

The proposal would result in one day of essays and one day of the MBE. The essay day would include a morning of three one-hour essays, and an afternoon of two one-hour essays plus a 90-minute performance test. As a practical matter, its most significant impact would be on the performance test, which has been a three-hour component of the exam. Each day would be weighted equally.
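
As a rough sketch of the mechanics (my own arithmetic on a hypothetical scale, not the Committee's actual formula), equal weighting simply means each day's scaled score contributes half of the total:

```python
# A rough sketch of equal day weighting (hypothetical numbers and scale,
# not the Committee's actual formula): each day's raw score is scaled so
# that neither day dominates the combined total.

def total_score(written_raw, written_max, mbe_raw, mbe_max, scale=2000):
    written_scaled = (written_raw / written_max) * (scale / 2)  # day one
    mbe_scaled = (mbe_raw / mbe_max) * (scale / 2)              # day two
    return written_scaled + mbe_scaled

# e.g., a test-taker scoring 65% on the written day and 130/190 on the MBE:
print(round(total_score(390, 600, 130, 190)))  # -> 1334 of 2000
```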

It would not make the exam any easier--that's a question left to the cut line for scores, which presumably would be recalibrated to reflect comparable difficulty. Instead, it would make the exam less grueling for test-takers, and less expensive for all parties--one fewer day staying in a hotel, and one fewer day of material to develop and score. Further, it might speed grading, which, given California's glacial pace of scoring--a pace that postpones bar admission ceremonies into December for students who graduated in May--would benefit everyone.

The most intriguing component of the agenda item, in my view, describes the mismatch between critiques of proposed changes and the point of the exam itself:

There continues to be some confusion with regard to what the bar examination is intended to do. The examination is not designed to predict success as a lawyer or even that a lawyer is ready for the practice of law. In fact, one of the best predictors of bar examination scores is the grades an applicant received during law school. So, in one sense, the examination is confirmation that the necessary skills and knowledge were learned during the three or four years of law study, through whatever means, which are needed to show minimum competence as a lawyer. The bar examination is an examination to test minimum competence in the law.

The format of the exam, then, whether through essays or multiple choice, whether three days or two days, is not the point.

An implementation plan would be submitted for review in April 2015 to determine when the two-day exam, if approved, would first be administered.

Rand Paul, Ben Cardin re-introduce ex-felon enfranchisement bills (with one glaring error)

Last year, I discussed Senator Rand Paul's proposal to enfranchise some ex-felons in some elections, and Senator Ben Cardin's broader proposal.

Both have reintroduced their bills this year: Mr. Paul's is S. 457, and Mr. Cardin's is S. 772. Neither changes a word from last year's versions, except for the years of introduction.

That's disappointing at a basic level: my (obviously wise and salient) critiques were never addressed in the new drafts of the bills. That's largely, I suppose, because the drafts aren't "new," but simply recycled from last term--and, probably more significantly, because this little blog is more for sorting out my own thoughts than for rewriting federal legislation.

That said, both bills include a glaring error.

Recently I noted that Representative Jim Sensenbrenner's reintroduction of the Voting Rights Amendment Act contained no "substantive" changes. That said, it did include a procedural change: it amended all references to prior election law provisions to reflect their new home in Title 52.

Mr. Paul's and Mr. Cardin's bills, unfortunately, still contain the old Title 42 references when referring to other election law provisions in the federal code.

Kudos to Mr. Sensenbrenner's staff for careful attention given to the reintroduction of his election law bill.

"Next round in LSAT disability fight"

Last year, I blogged twice about an agreement between the Law School Admission Council and the Department of Justice regarding accommodated LSAT test-takers. Under the agreement, LSAC agreed to stop "flagging" the scores of accommodated test-takers and to ensure additional opportunities for accommodated test-taking. Among other things, I noted:

LSAC wants to provide scores highly predictive of first-year law school grades. On that, it does a very good job--it is the best predictor of first-year grades; it is an even better predictor when combined (with an appropriate formula) with an undergraduate GPA. But the settlement means that LSAC must now provide both these scores, and scores that are less predictive (i.e., accommodated scores, which are not as predictive of first-year law school grades), without any indication to law schools about whether this score fits into one category or into another.

Now comes this piece by Karen Sloan in the National Law Journal. Details of the agreement have yielded disputes, including, unsurprisingly, this:

A spokeswoman declined to detail the council's objections, but issued a written statement citing potential damage to the test's ability to accurately predict who will succeed in law school. "We want to reiterate that we deeply respect the rights of disabled test-takers, but we cannot ignore the impact that certain of the recommendations would have on the overall integrity and fairness of the LSAT accommodation process," the council said.

There's much more to the story from several perspectives. But this crucial issue was, of course, entirely foreseeable.

Scholarship highlight: Katyal & Clement, On the Meaning of Natural Born Citizen

Former Solicitors General Neal Katyal and Paul Clement have this commentary in the Harvard Law Review Forum, On the Meaning of "Natural Born Citizen." It opens:

We have both had the privilege of heading the Office of the Solicitor General during different administrations. We may have different ideas about the ideal candidate in the next presidential election, but we agree on one important principle: voters should be able to choose from all constitutionally eligible candidates, free from spurious arguments that a U.S. citizen at birth is somehow not constitutionally eligible to serve as President simply because he was delivered at a hospital abroad.

The article nicely summarizes the reasons in defense of this interpretation (specifically and especially as applied to the Ted Cruz question). It also complements my recent piece examining the antecedent question: whether States have any independent power to evaluate qualifications for federal office. (Download on SSRN for the details!)

The slow, steady decline of the LSAT

Imagine you had a tool to predict the future. You'd probably use it. A lot, in fact, especially if that tool predicted success in your industry.

Then, one day, you abruptly stop using that tool. It would probably mean some combination of the following: a better tool for predicting success; a decline in quality of that tool; some significant negative side effect from using that tool; a lack of concern for learning the predictive value offered by that tool; or an alternative advantage that might be gained only if the tool is not used.

For the LSAT, the latter four reasons help explain the slow, steady decline of its use.

A decline in the quality of that tool

The LSAT has long been deemed an extremely reliable test--reliable in that it highly and consistently correlates with first-year law school grade point averages. (For numerous studies, see the LSAC reports.) It uses item response theory, which allows scores to carry the same meaning over time--a 170 on one administration looks roughly the same as a 170 on another, regardless of the month or year in which the test is taken.
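
For the curious, here is a minimal sketch of the idea behind item response theory--the standard three-parameter logistic model with made-up item parameters, not LSAC's actual scoring engine. Because performance is modeled item by item rather than by raw totals, ability estimates can be placed on a common scale across administrations:

```python
import math

def p_correct(theta, a, b, c):
    """3PL item response function: the probability that a test-taker of
    ability theta answers correctly an item with discrimination a,
    difficulty b, and guessing floor c (all hypothetical here)."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# One easier and one harder item; five answer choices imply c = 0.2.
for theta in (-1.0, 0.0, 1.0):
    easy = p_correct(theta, a=1.0, b=-0.5, c=0.2)
    hard = p_correct(theta, a=1.2, b=0.8, c=0.2)
    print(f"ability {theta:+.1f}: easy item {easy:.2f}, hard item {hard:.2f}")
```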

The LSAT is even better when combined with a prospective law student's undergraduate GPA. And, if a school so desires, it can obtain from LSAC an optimal "index formula" that weights LSAT and UGPA appropriately to best fit the school's first-year grade distribution.
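
The index itself is just a linear combination. Here's a minimal sketch with hypothetical weights (LSAC derives a school's real ones by regressing first-year grades on LSAT and UGPA):

```python
def admissions_index(lsat, ugpa, lsat_weight=0.028, ugpa_weight=0.36,
                     constant=-2.5):
    """An LSAC-style index: a weighted sum of LSAT and UGPA. The weights
    and constant here are illustrative, not any school's actual formula."""
    return lsat_weight * lsat + ugpa_weight * ugpa + constant

# Two applicants with different strengths can land on the same index:
print(round(admissions_index(168, 3.30), 2))  # stronger LSAT -> 3.39
print(round(admissions_index(163, 3.69), 2))  # stronger UGPA -> 3.39
```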

The LSAT, however, has lost some of this quality.

For many years, schools generally disclosed and relied upon the average of an applicant's LSAT scores. LSAT studies, after all, revealed that the average, not the high or the low score, is the most predictive of an applicant's ability. In 2006, however, the American Bar Association decided to request that schools report applicants' high scores rather than their average scores. Despite the lower predictive value of the high score, schools have increasingly pursued these high-end scores.
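
A quick simulation makes the statistical point (my own illustration, not one of the LSAC studies): if each sitting is true ability plus noise, averaging several sittings estimates ability without bias, while taking the maximum inflates it.

```python
import random

random.seed(1)
TRUE_ABILITY = 160.0   # hypothetical test-taker
NOISE_SD = 2.6         # roughly the LSAT's standard error of measurement
SITTINGS, TRIALS = 3, 100_000

mean_sum = max_sum = 0.0
for _ in range(TRIALS):
    sits = [random.gauss(TRUE_ABILITY, NOISE_SD) for _ in range(SITTINGS)]
    mean_sum += sum(sits) / SITTINGS
    max_sum += max(sits)

print(f"average of sittings: {mean_sum / TRIALS:.1f}")  # ~160.0, unbiased
print(f"highest sitting:     {max_sum / TRIALS:.1f}")   # ~162.2, inflated
```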

Additionally, LSAC recently entered a consent decree requiring it to stop flagging LSAT scores earned through accommodated test-taking and to make accommodations easier to secure. Because LSAC finds only scores earned under ordinary conditions to be reliable, the consent decree means that the scores schools obtain will have lower predictive value.

Some significant negative side effect from using that tool

When U.S. News & World Report calculates its law school rankings, one-eighth of a school's entire score is based on a single number: the median LSAT score of the incoming class. This creates significant distortions in how law schools assemble their incoming classes. Schools pursue that median LSAT score over the more promising index score they might otherwise use. Even more troubling, LSAT takers are fewer and fewer, making high scores more difficult to obtain.
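
A toy example (hypothetical scores) shows the distortion concretely: because the median ignores everything below the midpoint, a school can hold its reported number steady even as the bottom of the class erodes--exactly the 25th-percentile decline discussed below.

```python
from statistics import median

# Two hypothetical seven-student classes with identical medians:
balanced = [170, 166, 163, 160, 158, 156, 154]
hollowed = [170, 166, 163, 160, 150, 147, 144]  # bottom quartile collapses

print(median(balanced), median(hollowed))  # 160 160 -- the metric is blind
```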

As a result, schools have an incentive to avoid the negative side effect of a decline in their LSAT median, which might mean a decline in their USNWR rank. And so, as recent reports indicate, schools have begun to admit a non-trivial number of students without that score. Really, the trend is not new, but several years old--begun under new interpretations of regulations that permit alternative metrics, such as SAT or ACT scores, to evaluate incoming students.

Of course, there's no data indicating how reliably SAT or ACT scores correlate with first-year law school grades, or how to index those scores with undergraduate GPA for an even more reliable picture. But the negative externality--the risk of a median decline and a corresponding USNWR hit--is too great a cost. (You'll note, then, that the use of SAT or ACT scores is not, as one might say, a "better tool for predicting success." It is not a tested method at all.)

A lack of concern for learning the predictive value offered by that tool

It might have been the case that the LSAT was valued by admissions departments because it was a way of predicting success. Better students would be at a lower risk of dropping out or failing out. Better students would have a better chance at passing the bar and earning desirable employment outcomes.

But if those metrics are less valuable than other concerns--such as today's LSAT profile for an incoming class, as against the profile of a graduating class in three years or its employment profile in four--then schools push them aside. It's not that schools are unconcerned with first-year student success--they undoubtedly are. It's simply that such concerns necessarily lessen as the obsession over the LSAT median grows, at the expense of the depth of the class, as the abrupt decline in the 25th percentile at many schools demonstrates.

An alternative advantage that might be gained only if the tool is not used

These are, of course, rather rankings-centric views. But there's also an advantage to be gained in refusing to use LSAT scores for prospective students. If a school is one of the few--or the only one--doing so, it offers a very strong enticement for the, let's face it, lazy prospective law student: forgo taking the LSAT, forgo opportunities at most other law schools in America, and effectively commit to the school without an LSAT requirement (assuming other metrics, like GPA and a "comparable" SAT or ACT score, have been met).

It's a decisive recruiting advantage, particularly for a law school seeking to attract candidates from its home undergraduate institution, a baked-in base likely inclined to attend the same law school anyway. Sure, students lose options elsewhere, but they save the time and financial cost of LSAT preparation and agony. It might be, of course, that this incentivizes all of the wrong sorts of students, but that might be a matter of perspective, depending on whether one views the LSAT as an unnecessary hoop or an objective measure of likely future performance.

*

The LSAT, then, is not abruptly dying. It has been experiencing nicks and scrapes for a decade now, and an increasing number of factors, both internal to LSAC and external to the market for legal education, have put it in a precarious position of slow and steady decline.

Pregaming the U.S. News 2016 law school rankings

Despite the pernicious effect the U.S. News & World Report law school rankings have upon legal education, they remain the most trusted resource for 21-year-olds seeking to set a course for the rest of their lives. Before the rankings are released next week, I thought I'd reshare three sets of rankings I put together when the Class of 2013 employment data came out (data that will be incorporated into the forthcoming USNWR rankings). Law schools will likely report their Class of 2014 employment data in mid-April 2015.

Legal employment outcomes in California in 2013 (March 31, 2014)

Legal employment outcomes in 2013 (April 11, 2014)

Law school microranking: federal judicial clerkship placement, 2011-2013 (May 6, 2014)