Some good news, and some perspective, on the June 2017 LSAT

The Law School Admission Council recently shared that the number of LSATs administered in June 2017 rose significantly year over year: a 19.8% increase. In historical terms, the June 2017 test had more test-takers than any June since 2010, which had a whopping 33,973 (the last year of the "boom" cycle for law school admissions). That's good news for law schools looking to fill their classes with more students and, hopefully, more qualified students. I've visualized the last decade of June LSAT administrations below.

Of course, there are many more steps along the way to a better Class of 2021: applicant quality among those test-takers, whether they turn into actual applicants, etc. And given the potential for schools to accept the GRE instead of the LSAT, LSAT administrations may slightly understate expectations for future applicants.

But one data point is worth considering, and that's repeat test-takers. LSAC discloses that data in somewhat opaque ways, but it's worth looking at how many first-time test-takers were among the June 2017 cohort.

First-time test-takers give a better picture of the likely changes to the quality and quantity of the applicant pool. Repeaters are permitted to use their highest score, which is a worse indicator of their quality. (They may now retake the test an unlimited number of times.) Additionally, first-time test-takers represent genuinely new potential applicants, as opposed to repeaters, who were probably already inclined to apply (or who have perhaps already applied and are seeking better scholarship offers).

Repeat test-takers have been slowly on the rise, as the graphic above (barely!) demonstrates. First-time test-takers made up 84.9% of the June 2007 LSAT administration. That figure has eroded every June since, and this June first-time test-takers made up 74% of the administration. About 27,600 took the test, and about 20,430 took it for the first time; compare that to June 2011, when there were fewer test-takers overall (about 26,800) but more taking it for the first time (21,610).
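
To keep the arithmetic straight, here is a minimal sketch using the approximate counts quoted above; the June 2016 baseline is implied by the reported percentage increase rather than taken from LSAC's table.

    # Rough arithmetic on the June figures quoted above (all counts approximate).
    june_2017_total = 27_600
    june_2017_first_time = 20_430
    june_2011_total = 26_800
    june_2011_first_time = 21_610

    print(f"June 2017 first-time share: {june_2017_first_time / june_2017_total:.1%}")   # ~74.0%
    print(f"June 2011 first-time share: {june_2011_first_time / june_2011_total:.1%}")   # ~80.6%

    # The reported 19.8% year-over-year increase implies a June 2016 baseline of roughly:
    print(f"Implied June 2016 administrations: {june_2017_total / 1.198:,.0f}")          # ~23,000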

So there is reason for optimism among law schools looking for a boost in their admissions figures. But there's also some perspective to keep in mind about what these figures actually represent.

"The Kobach fallout on election security"

I have a guest post at Rick Hasen's Election Law Blog. It begins:

The Presidential Advisory Commission on Election Integrity offered its first public request this week, as Vice Chair and Kansas Secretary of State Kris Kobach requested voter information from every state. That single request has likely done long-lasting damage to the political ability of the federal government to regulate elections. In particular, any chance that meaningful election security issues would be addressed at the federal level before 2020 worsened dramatically this week.

The request is sloppy, as Charles Stewart carefully noted, and, at least in some cases, forbidden under state law. The letter was sent to the wrong administrators in some states, it requests data like “publicly-available . . . last four digits of social security number if available” (which should never be permissible), and it fails to follow the proper protocol in each state to request such data.

Response from state officials has been swift and generally opposed. It has been bipartisan, ranging from politically-charged outrage, to drier statements about what state disclosure law permits and (more often) forbids.

But the opposition reflects a major undercurrent from the states to the federal government: we run elections, not you.

Puerto Rican statehood and the effect on Congress and the Electoral College

After last weekend's low-turnout referendum in Puerto Rico, in which voters overwhelmingly favored statehood, it's worth considering the impact statehood might have on representation and elections, despite the low likelihood of statehood actually happening.

Puerto Rico would receive two Senators, increasing the size of the Senate to 102.

Census estimates project that Puerto Rico would send five members to the House. The House has not expanded in size since 1929, so Puerto Rico's delegation would come at the expense of other states' delegations. In 1959, however, with the admission of Alaska and Hawaii, the House temporarily increased from 435 members to 437, then dropped back to 435 after the 1960 Census and reapportionment. Congress might do something similar with Puerto Rico upon statehood. (For some thoughts about doubling the size of the House, see my post on the Electoral College.)

Based on projections for 2020, Puerto Rico's five seats would likely come at the expense of one seat each from California, Montana, New York, Pennsylvania, and Texas. (It's worth noting these are projections; Montana is otherwise likely to receive a second representative after the 2020 reapportionment, and it is that prospective seat it would lose.)

This would also mean that in presidential elections, Puerto Rico would have 7 electoral votes, and these five states would each lose an electoral vote. The electoral vote total would be 540, and it would take 271 votes to win.
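
The mechanics are worth spelling out. House seats are apportioned by the method of equal proportions (Huntington-Hill), so under a fixed 435-seat House a new state's delegation necessarily comes out of other states' seats. A minimal sketch, using hypothetical populations purely for illustration (not Census projections), along with the electoral-vote arithmetic:

    import heapq
    import math

    def apportion(populations, house_size):
        # Method of equal proportions: each state starts with one seat; each remaining
        # seat goes to the state with the highest priority value pop / sqrt(n * (n + 1)),
        # where n is the state's current number of seats.
        seats = {state: 1 for state in populations}
        heap = [(-pop / math.sqrt(2), state) for state, pop in populations.items()]
        heapq.heapify(heap)
        for _ in range(house_size - len(populations)):
            _, state = heapq.heappop(heap)
            seats[state] += 1
            n = seats[state]
            heapq.heappush(heap, (-populations[state] / math.sqrt(n * (n + 1)), state))
        return seats

    # Hypothetical populations, for illustration only: with the house size held fixed,
    # admitting "New" takes seats away from the existing states.
    states = {"A": 20_000_000, "B": 10_000_000, "C": 3_000_000}
    print(apportion(states, 30))
    print(apportion({**states, "New": 3_300_000}, 30))

    # Electoral votes if Puerto Rico were a state: 435 House seats (unchanged),
    # 102 Senators, plus 3 for the District of Columbia.
    total_electoral_votes = 435 + 102 + 3           # 540
    votes_to_win = total_electoral_votes // 2 + 1   # 271
    print(total_electoral_votes, votes_to_win)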

Virgin Islands Supreme Court embroiled in another candidate qualifications dispute

A few years ago, I blogged about a rather extraordinary series of cases from the Virgin Islands concerning candidate qualifications. A candidate previously convicted of tax evasion was kept off the ballot by the Virgin Islands Supreme Court. A federal court ordered otherwise. The Virgin Islands Supreme Court ignored it.

A similar dispute has arisen recently. Kevin Rodriquez was elected to the legislature, but some claimed he was not a three-year resident of the Virgin Islands, which would mean he could not serve in the legislature. After litigation, the Virgin Islands Supreme Court ordered that he not be seated. The Third Circuit last week reversed. In doing so, it approvingly cited the power of the Board of Elections to judge qualifications before the election (a dubious proposition, as my earlier posts have noted).

The continued steady decline of the LSAT

In 2015, I wrote a post called "The slow, steady decline of the LSAT." I described a number of problems that have arisen with the LSAT--problems partly of the making of LSAC, which administers the test. LSAC (and the ABA, and USNWR) count a prospective law student's highest LSAT score--even though the average of scores is a more accurate predictor of success. LSAC entered a consent decree agreeing not to flag accommodated test-takers, even though it conceded its test was reliable only under ordinary test-taking conditions. Schools began to avoid using the LSAT in admitting some students for USNWR purposes to improve their medians. Schools also obsessed over the LSAT median, even though index scores were a more reliable predictor of success, and even as the scores of 25th percentile--and lower--admitted students dropped at a faster rate, imperiling future success on the bar exam.

In the last two years, the LSAT has continued to decline.

First, schools have started to turn to the GRE in lieu of the LSAT. It's not for USNWR purposes, because USNWR factors GRE scores into its LSAT equivalent. Instead, it's because the GRE is a general exam, and the LSAT is a specific one. And if there's little difference between what the tests are measuring, why not permit people who take the more general exam while considering a broader array of graduate programs to apply to law school? Admittedly, the reliability of the GRE is more of an open question left for another day--but I suspect that if law schools needed to rely on SAT scores, doing so wouldn't be dramatically worse than relying on LSAT scores; and I imagine we'll see some studies in the near future regarding the reliability of using GRE scores.

Second, LSAC has become bizarrely defensive of its test. To the extent it intends to go to war with law schools over its own test--and go to war in ways that are not terribly logical--it does so at its own peril.

Third, prospective law students may now retake the LSAT an unlimited number of times. Previously, test-takers were limited to 3 attempts in 2 years (that is, 8 administrations of the test); they needed special permission to retake it more than that. Given that schools need only report the highest score--and given that the highest score is less reliable than the average of scores--we can expect the value of the LSAT to decline to a still-greater degree.
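
A quick simulation illustrates why the highest of several scores is a weaker signal than the average. This is synthetic data, purely for illustration: each administration is modeled as true ability plus noise (the noise level is roughly the standard error of measurement LSAC has reported, but treat it as an assumption).

    import random
    import statistics

    def correlation(xs, ys):
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    random.seed(0)
    n_students, attempts, noise_sd = 10_000, 5, 2.6
    abilities = [random.gauss(151, 10) for _ in range(n_students)]
    attempts_by_student = [[a + random.gauss(0, noise_sd) for _ in range(attempts)]
                           for a in abilities]

    highest = [max(s) for s in attempts_by_student]
    average = [statistics.mean(s) for s in attempts_by_student]

    print("corr(true ability, highest of 5):", round(correlation(abilities, highest), 3))
    print("corr(true ability, average of 5):", round(correlation(abilities, average), 3))
    print("average amount the highest score overstates the average:",
          round(statistics.mean(h - a for h, a in zip(highest, average)), 2), "points")

In this toy setup, the highest score both overstates the student's average performance (by about three points here) and tracks true ability slightly less well than the average of attempts--and the gap grows as the number of retakes grows.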

Fourth, LSAC will now administer the LSAT 6 times a year instead of 4 times a year. The linked article offers understandable justifications--greater flexibility given the GRE's flexibility, more opportunities given the less-rigid law school admissions cycle, and so on. But given the unlimited number of opportunities to retake, plus the highest-score standard, we can expect, again, a still-greater decline in value of an LSAT score.

Many of the problems I've identified here are principally driven by one concern: the USNWR rankings. Without them, enterprising (and risk-taking) law schools might consider only the average, or only the first two or three attempts, or consider the index score to a greater degree, or weight the quality of the undergraduate institution and difficulty of the undergraduate major to a greater degree.

But USNWR rankings--which count the median LSAT score as a whopping one-eighth of the total rankings formula--continue to drive admissions decisions. As the LSAT declines in value, it places many schools in an increasingly untenable position: rely upon the increasingly flawed metric of the LSAT, or succumb to a decline in the USNWR rankings.

Draft work in progress: "The High Cost of Lowering the Bar"

My colleague Rob Anderson and I have posted a draft article, The High Cost of Lowering the Bar, on SSRN. From the abstract:

In this Essay, we present data suggesting that lowering the bar examination passing score will likely increase the amount of malpractice, misconduct, and discipline among California lawyers. Our analysis shows that bar exam score is significantly related to likelihood of State Bar discipline throughout a lawyer’s career. We investigate these claims by collecting data on disciplinary actions and disbarments among California-licensed attorneys. We find support for the assertion that attorneys with lower bar examination performance are more likely to be disciplined and disbarred than those with higher performance.

Although our measures of bar performance only have modest predictive power of subsequent discipline, we project that lowering the cut score would result in the admission of attorneys with a substantially higher probability of State Bar discipline over the course of their careers. But we admit that our analysis is limited due to the imperfect data available to the public. For a precise calculation, we call on the California State Bar to use its internal records on bar scores and discipline outcomes to determine the likely impact of changes to the passing score.

We were inspired by the lack of evidence surrounding costs that may be associated with lowering the "cut score" required to pass the California bar, and we offered this small study as one data point toward that end. The Wall Street Journal cited the draft this week, and we've received valuable feedback from a number of people. We welcome more feedback! (We also welcome publication offers!)

The paper really does two things--identifies the likelihood of discipline associated with the bar exam score, and calls on the State Bar to engage in more precise data collection and analysis when evaluating the costs and benefits of changing the cut score.
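
To make the shape of the first point concrete, here is a minimal illustrative sketch--emphatically not the paper's model or data: assume some hypothetical declining relationship between bar score and career discipline probability, and compare the expected discipline rate of attorneys admitted under the current cut score with that of the marginal attorneys a lower cut score would add.

    import math
    import random

    def p_discipline(score):
        # Hypothetical declining relationship between exam score and probability of
        # discipline over a career -- for illustration only, not the paper's estimates.
        return 0.25 / (1 + math.exp(0.03 * (score - 1350)))

    random.seed(1)
    scores = [random.gauss(1440, 100) for _ in range(100_000)]   # hypothetical score distribution

    old_cut = 1440    # California's current cut score (on the 2000-point scale)
    new_cut = 1390    # a hypothetical lower cut score

    def mean_p(group):
        return sum(map(p_discipline, group)) / len(group)

    currently_admitted = [s for s in scores if s >= old_cut]
    marginal_admittees = [s for s in scores if new_cut <= s < old_cut]

    print("expected discipline rate, currently admitted:", round(mean_p(currently_admitted), 3))
    print("expected discipline rate, marginal admittees:", round(mean_p(marginal_admittees), 3))

The numbers are invented; the point is only that, if discipline risk declines with score, the marginal admittees under a lower cut score necessarily carry a higher expected discipline rate--which is the cost the draft asks the State Bar to quantify with its own data.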

It emphatically does not do several things. For instance, it does not identify causation, and it identifies a number of possible reasons for the disparity (at pp. 12-13 of the draft). Additionally, it simply identifies a cost--lowering the cut score will likely increase the number of attorneys subject to discipline. It does not make any effort to weigh that cost--it may well be the case that the State Bar views the cost as acceptable given the trade-off of benefits (e.g., more attorneys, more access to justice, etc.) (see pp. 11-12 of the draft). Or it might be the case that the occupational licensing of the state bar and the risk of attorney discipline should not hinge on correlational measures like bar exam score.

There are many, for instance, who have been thoughtfully critical of the bar exam and who would likely agree that our findings are accurate but reject the notion that they represent insurmountable costs. Consider the thoughtful commentary from Professor Deborah Jones Merritt at the Law School Cafe, who has long offered careful and substantive critiques of the use of the bar exam generally.

It has been our hope that these costs are addressed in a meaningful, substantial, and productive way. We include many caveats in our findings for that reason.

Unfortunately, not everyone has reacted to this draft that way.

The Daily Journal (print only) solicited feedback on the work, and it included a couple of salient quotations. First:

Bar Trustee Joanna Mendoza said she agreed the study should not be relied on for policy decisions.

“I am not persuaded by the study since the professors did not have the data available to prove their hypothesis,” she said.

We feel confident in our modest hypothesis--that attorneys with lower bar exam scores are subject to higher rates of discipline--and we use two methods to support it. We do not have individualized data that would allow us to measure the precise effect, but we are confident in this central hypothesis.

Worse, however, is the disappointing nature of the answer. Our draft expressly calls on the State Bar to study the data! While we can only roughly address the impact at the macro level, we call on the bar to use its data for more precise information! We do hope that the California State Bar will do so. But it appears it will not--at least, not unless it has already planned on doing so:

Bar spokeswoman Laura Ernde did not directly address questions about the Pepperdine professors’ study or their call for the bar to review its internal data, including non-public discipline. Ernde wrote in an email that the agency would use its ongoing studies to make recommendations to the Supreme Court about the bar exam.

Second are the remarks of David L. Faigman, dean of the University of California, Hastings College of the Law. Dean Faigman has been one of the most vocal advocates for lowering the cut score (consider this Los Angeles Times opinion piece). His response:

Among his many critiques, Faigman said the professors failed to factor in a number of variables that impact whether an attorney is disciplined. 

“If they were to publish it in its current form, it would be about as irresponsible a product of empirical scholarship I could imagine putting out for public consumption,” Faigman said. “God forbid anybody of policy authority should rely on that manuscript.”

It's hard to know how to address a critique when the epithet "irresponsible" is the substance of the critique.

We concede that many variables may contribute to attorney discipline (pp. 12-13), and the paper makes no attempt to disentangle them. Instead, we're pointing out that lower bar scores correlate with higher discipline rates, and that lowering the cut score further would likely result in still higher discipline rates. Yes, many factors go into discipline--but the consequence of lowering the cut score, higher rates of discipline, would still remain.

And our call for policy authorities to "rely" on the manuscript is twofold--to consider that there are actual costs to lowering the cut score, and to use more data to more carefully evaluate those costs. Both, I think, are valuable things for a policy authority to "rely" upon.

We hope that the paper sparks a more nuanced and thoughtful discussion than the one that has been waged in lobbying the State Bar and state legislature so far. We hardly know what the "right" cut score is, or the full range of costs and benefits that arise at varying changes to the cut score of the bar exam. But we hope decisionmakers patiently and seriously engage with these costs and benefits in the months--and, perhaps ideally, years--ahead.

Does the bar exam adequately test prospective lawyers' minimum competence?

The critiques of the bar exam have grown louder over the last few years on the heels of declining bar pass rates. But the most popular critiques have changed somewhat. It used to be that external factors--such as the ExamSoft debacle--were a target. Then came charges that the bar exam was harder than usual. But the most recent charge is actually a quite longstanding critique of the bar exam--it simply isn't a good measure of prospective lawyers' "minimum competence."

The bar has attempted to adjust over the last fifty years. Many states now have a "performance test," a component designed to simulate what lawyers do--test-takers are given some law and some facts and asked to address the problem with a legal task. That said, performance test scores correlate moderately with other components of the bar exam, and the performance test perhaps is not serving the function some hoped it would.

Regardless, critiques of the bar exam are longstanding, and some of the most popular look something like this: why did a state like California pick this score as the passing score for "minimum competence"? And is the bar exam any good at testing the kinds of things that lawyers actually do? The bar exam is a three-day (in California, beginning this July, two-day), closed-book test with multiple-choice and timed essay questions that in no way resembles the real world of law practice. Why should we trust this test?

It's a fair point, and it's one best met with a question: what ought the bar test? And, perhaps a more subtle question: what if it turns out that the answer to what the bar ought to test actually aligns quite closely with the results from the existing bar exam?

A study in 1980 in California is one of the most impressive I've seen on this subject. And while it's a little old, it's the kind of thing that ought to be replicated before state bars go about making dramatic changes to their exams or scoring methods. I'll narrate what happened there. (For details, consider two reports on the study and the testimony presented to California lawmakers asking the exact same questions in 1984, after the particularly poor performance of applicants to the state bar on the July 1983 bar exam--a historically low score essentially matched in the July 2016 administration.)

After the July 1980 bar exam in California, the National Conference of Bar Examiners teamed up with the California Committee of Bar Examiners to run a study. They selected 485 applicants to the bar who had taken the July 1980 exam. Each of these applicants took an additional two-day test in August 1980.

The two-day test required participants to "function as counsel for the plaintiff in a simulated case" on one day, and as "counsel for the defendant in a different simulated case" on the other. Actors played clients and witnesses. The participants were given oral and written tasks--client interviews, discovery plans, briefs, memoranda, opening statements, cross-examination, and the like. They were then evaluated along a number of dimensions and scored.

In the end, those scores were correlated with the applicants' bar exam scores. The relationship between the simulation scores and the general bar exam scores was fairly strong--"about as strong as the underlying relationship between the Essay and MBE section of the [General Bar Exam]." "In short," the study concluded, the simulation and the bar exam "appear to be measuring similar but not identical abilities."

Additionally, a panel of 25 lawyers spent more than two days in extended, in-depth evaluation of 18 of these participants. The panelists were clinical professors, law professors, attorneys, judges, and others with a variety of experience. They were asked to evaluate these 18 participants' performance across the various dimensions on a scale from "very unsatisfactory" (i.e., fail) to "borderline" to "very satisfactory" (i.e., pass). The panel's judgments about the pass/fail line were consistent with where the line was drawn on the California bar exam (with the caveat that this was a sample of just 18 applicants).

It might be that there are different things we ought to be testing, or that this experiment has its own limitations (again, I encourage you to read it if you're interested in the details). But before anything is done about the bar exam, it might be worth spending some time thinking about how we can evaluate what we think ought to be evaluated--and recognize that there are decades of studies addressing very similar things that we may ignore to our peril.

Whittier's challenges may have been unique to California

On the heels of my analysis of the challenges facing Whittier, I started thinking about how Whittier compared with the great many other law schools in the country facing the same challenges--a shrinking law school applicant pool, declining quality of applicants, continued challenges in bar exam pass rates and graduate employment statistics. Whittier's incoming class profile isn't unique. What makes its situation different from that of other schools?

The answer, I think, lies in California, along three dimensions--state bar cut scores, transfers, and employment.

I recently read a law professor suggest that Whittier was making a significant mistake in closing, because it is located in Orange County, California, a place that would experience great demand for legal services in the near future. I tend to find just the opposite--if Whittier were dropped into just about any of the other 49 states, it likely would not be facing the same pressures it faces in its current location. (This is, of course, not to say that it wouldn't face the kinds of pressures confronting legal education generally, but that its problems are exacerbated in California.)

I looked at the incoming class profiles from 2013 and picked 11 other schools that closely matched Whittier's overall incoming LSAT profile.

School Name Matriculants 75th LSAT 50th LSAT 25th LSAT
Atlanta's John Marshall Law School 235 152 149 146
John Marshall Law School 404 152 149 146
Mississippi College 159 153 149 145
New England Law | Boston 238 153 149 145
Nova Southeastern University 305 152 149 146
Oklahoma City University 162 153 149 145
Suffolk University 450 153 149 145
University of Massachusetts Dartmouth 78 151 148 145
University of North Dakota 83 153 148 145
Western New England University 120 152 149 145
Whittier Law School 221 152 149 145
Widener-Commonwealth 74 151 148 145

First, I looked at first-time bar pass rates for the July 2016 bar, alongside each state's cut score and the state's overall first-time pass rate among graduates of ABA-accredited schools. (As of this writing, neither the Mississippi state bar nor Mississippi College has disclosed school-specific bar pass rates yet.)

School Name State Cut score July 2016 Pass Rate Statewide Pass Rate
Mississippi College MS 132 * 75%
Widener-Commonwealth PA 136 79% 75%
Western New England University MA 135 74% 81%
New England Law | Boston MA 135 73% 81%
University of North Dakota ND 130 73% 73%
Suffolk University MA 135 70% 81%
University of Massachusetts Dartmouth MA 135 69% 81%
Oklahoma City University OK 132 67% 75%
John Marshall Law School IL 133 65% 77%
Nova Southeastern University FL 136 63% 68%
Atlanta's John Marshall Law School GA 135 43% 73%
Whittier Law School CA 144 22% 62%

A Pepperdine colleague blogged last year that if Whittier were in New York, it would likely have had a 51% first-time pass rate instead of a 22% pass rate. New York's cut score is relatively low--a 133.  (Whittier's average combined California bar score for first-time test-takers in July 2016 was a 135.5, above 133 and well below California's 144.) If Whittier were in Massachusetts or Georgia, it might have had something near 51%. If it were in Mississippi or North Dakota, its pass rate may have approached 60%. A first-time pass rate of 3 in 5 is still not something to be happy about, but it's a far cry from a first-time rate around just 1 in 5.
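
A rough way to see how much the cut score alone moves a school's pass rate: assume, purely for illustration, that the school's first-time takers' scaled scores are normally distributed around the reported 135.5 mean. The standard deviation below is my assumption, chosen only so that the California figure lands near the reported 22%; real score distributions need not be normal, which is why this won't exactly reproduce the 51% New York estimate.

    from statistics import NormalDist

    mean_score = 135.5   # Whittier's reported mean scaled score, July 2016 first-timers
    assumed_sd = 11      # assumption for illustration only
    dist = NormalDist(mean_score, assumed_sd)

    # Cut scores from the table and discussion above.
    cut_scores = {"CA": 144, "MA": 135, "GA": 135, "NY": 133, "MS": 132, "ND": 130}
    for state, cut in sorted(cut_scores.items(), key=lambda kv: -kv[1]):
        print(f"{state} (cut {cut}): {1 - dist.cdf(cut):.0%} of this distribution clears the cut")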

It isn't that some of these schools figured out how to help their students to pass the bar and that Whittier lagged; it's that California's high cut score makes it more difficult to pass the bar than if Whittier grads had taken the bar in almost any other state. (This isn't to say that a higher or a lower cut score is better or worse; it's simply to describe the situation that California schools face compared to others.)

Second, another factor in bar pass rates is the loss of high-performing students who transfer elsewhere. I looked at transfer rates among these schools in 2014--the loss of students who had matriculated at each school in 2013.

School Name Pct Transfers Out Transfers Out
Atlanta's John Marshall Law School 19% 45
Nova Southeastern University 13% 41
Whittier Law School 13% 28
John Marshall Law School 12% 50
New England Law | Boston 11% 26
University of Massachusetts Dartmouth 10% 8
Western New England University 8% 10
Suffolk University 8% 37
University of North Dakota 5% 4
Oklahoma City University 3% 5
Widener-Commonwealth 3% 2
Mississippi College 3% 4

It may come as little surprise that in larger states with many competitive schools--schools that shrank their incoming class sizes to preserve their LSAT and UGPA medians and tended to rely on transfers to help backfill their classes--transfer attrition was heaviest. Atlanta's John Marshall lost 20 students to Emory, 10 to Georgia State, and 5 to Mercer; Nova lost 16 to Miami and 5 to Florida State; and Whittier lost 15 to Loyola-Los Angeles. These schools lost a number of their best students and, unsurprisingly, had some of the worst bar outcomes in this cohort. (Four of these schools are in Massachusetts, where perhaps no single school attracts the bulk of transfer attention; and schools in less competitive states like Mississippi, North Dakota, and Oklahoma experienced insignificant attrition.)
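
The percentages in the transfer table are, I assume, computed against the 2013 matriculant counts in the first table above; a quick consistency check for a few schools:

    # Transfers out in 2014 and 2013 matriculants, both taken from the tables above.
    transfers_and_matriculants = {
        "Atlanta's John Marshall": (45, 235),
        "John Marshall (IL)": (50, 404),
        "Nova Southeastern": (41, 305),
        "Whittier": (28, 221),
    }
    for school, (transfers, matriculants) in transfers_and_matriculants.items():
        print(f"{school}: {transfers / matriculants:.0%} of the 2013 entering class transferred out")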

Third, Whittier had low job placement in full-time, long-term, bar passage-required and J.D.-advantage positions, but that partly reflects the fact that placement rates at California schools lag most of the rest of the country. (It's also exacerbated by the low bar pass rate.) Consider each school's placement in FTLT BPR & JDA positions alongside the statewide placement rate into such (unfunded) jobs.

School Name FTLT BPR+JDA Statewide emp Delta (pct. points)
Mississippi College 76% 72% 4
Oklahoma City University 73% 77% -4
Widener-Commonwealth 68% 83% -15
Suffolk University 66% 79% -13
John Marshall Law School 66% 77% -11
New England Law | Boston 60% 79% -19
Atlanta's John Marshall Law School 58% 77% -19
Nova Southeastern University 58% 65% -7
Western New England University 57% 79% -22
University of North Dakota 57% 57% 0
University of Massachusetts Dartmouth 55% 79% -24
Whittier Law School 39% 64% -25

Whittier lags in placement here, too, but in part because California has unusually low placement. (North Dakota, a state with just one flagship law school, is an outlier.) This is not a total defense of any particular school's outcomes, either--Florida also appears to have a relatively high number of law school graduates, and its employment rate shows similar challenges. But coupled with Whittier's low bar passage rate, one can see why placing students in positions, particularly "bar passage required" positions, would be even more difficult. Several other schools show employment rates 19 to 24 points behind their state averages. (Schools like Oklahoma City, one of three in its state; Mississippi College, one of two; and North Dakota, the only school in its state, may distort these comparisons somewhat.)

I then read another piece, from a 1970s graduate, lamenting that Whittier had "lost its way" in training graduates ready to take the bar. I pointed out that Whittier's challenges were hardly recent: the ABA had placed Whittier on probation in 2005, which led to efforts that bolstered Whittier's first-time bar pass rate in California past 84%.

But it's worth looking back to the 1970s, when Whittier first sought accreditation, to consider its situation and aspirations. Here's an excerpt from the Los Angeles Times in 1978 when Whittier received ABA accreditation:

In 1978, a 550 was around the 51st percentile of LSAT scores--something like a 151 today. Whittier's tuition in 1974 was $1,200 per year, or around $6,000 per year in 2017 dollars. It was on pace to increase to $2,900 per year in just four years, or about $11,000 in 2017 dollars. There are obviously significant benefits that come from becoming an ABA-accredited law school. But there are also costs of accreditation--and I'm not sure that a law school with tuition of $11,000 a year would face the same kinds of pressures in selecting prospective students.
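
The inflation adjustment is just a ratio of price indices. A small sketch, using approximate CPI-U annual averages that I'm supplying for illustration (they are not from the original article):

    # Approximate CPI-U annual averages (assumed for illustration).
    cpi = {1974: 49.3, 1978: 65.2, 2017: 245.1}

    def to_2017_dollars(amount, year):
        return amount * cpi[2017] / cpi[year]

    print(round(to_2017_dollars(1200, 1974), -2))   # roughly $6,000
    print(round(to_2017_dollars(2900, 1978), -2))   # roughly $10,900, i.e., about $11,000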

I don't pretend to understand all the dynamics of legal education in California over the last 40 years, with more than 20 ABA-accredited law schools and a number of California-accredited and unaccredited schools. But I do think this context suggests that some of the problems Whittier faced were exacerbated by the California market in particular.