Incumbent protection likely biggest effect of California moving presidential primary to March

California's SB 568, which has been sent to Governor Jerry Brown for his signature, would move the presidential primaries from June to the first Tuesday after the first Monday of March.

There’s an old saying applied to many business decisions reflecting the tradeoffs that must be made: “Fast, good, or cheap—pick two.” For presidential primaries in California, the saying might be modified: “Competitive, influential, or cheap—pick two.” The California legislature is trying to plan a presidential primary that is both cheap and influential, but doing so would make most elections in California less competitive. At its outer bounds, that may be unconstitutional, but the answer to that question is far from clear.

California had for some time held its presidential primaries in March. In 2008, it moved that primary up to February and joined a glut of states holding primaries on a “Super Tuesday.” But California voters didn’t exercise outsized influence because so many other states were voting the same day. And a primary that early was costly—about $100 million—and voters still had to go to the polls twice more, once for congressional and state primaries in the summer and once for the general election in November.

Rather than burdening voters with three trips to the polls in one year, California consolidated the presidential primary with its June state primaries. That saved money in the state budget, too. But it came at a political cost—one of influence. By June, there is little influence left for California voters in a presidential primary. In 2016, for instance, Donald Trump and Hillary Clinton had all but secured their parties’ nominations.

Of course, by the first week in March, many candidates have already dropped out of the race after Iowa, New Hampshire, South Carolina, and Nevada have voted. But the opportunity to influence the selection of the presidential nominee is certainly at least somewhat greater in March than in June.

Then came a couple of complications. A March presidential primary would mean a return to three elections in a presidential year. One concern is voter fatigue, but the greater concern for California is another nine-figure election bill. So the legislature chose to move all primaries to March in presidential years. But leaving the primary in June in non-presidential years might confuse voters or cause irregularities, so the legislature then chose to put all primaries, in every cycle, in March.

Just a handful of states had congressional primaries in March in 2016: as far as I could discern, Alabama, Arkansas, Illinois, Mississippi, Ohio, and Texas were the only states that held congressional primaries that early. Many jurisdictions hold primaries much closer to the general election, often in September.

There are good reasons for later primaries. They give potential candidates a longer opportunity to consider challenging an incumbent or entering the race for an open seat. They also allow voters to consider more political information about a candidate, particularly an incumbent, before voting.

A March primary in California, however, means that challengers must file by the December before, and enter the race (and begin collecting signatures) well before that. For a two-year House race, that's a very long lead time. Granted, in many contemporary cases, candidates for office frequently announce their candidacies well before this time period. But that is out of choice, not necessity.

It also has the effect of insulating incumbents. Incumbents will have much more limited political accountability if candidates must file so early. If an incumbent sees no serious competitors, that incumbent may feel sufficiently insulated and politically unaccountable to act without regard to voters' preferences. The earlier the field is set, the more confident the incumbent can be, either at the filing deadline in December or after the primary in March.

It can have very practical effects. Assuming the law took effect for 2018, for instance, a sitting member of the House could shoot someone at the Rose Bowl on New Year's Day in 2018, but might not face any new competitors in the March primary or the November election. Competitors could only enter the race for 2020. It's a practical effect that redounds to the benefit of incumbents.

A further complicating factor is California’s “top two” primary. The top two vote-getters in the March primary will face off in the November general election. That might be two candidates from the same party, or a fairly marginal candidate in a race the incumbent has little likelihood of losing, further ossifying the effects of an early primary and insulating the incumbent.

Here's where the constitutional element comes into play. In Anderson v. Celebrezze in 1983, the Supreme Court concluded that a March filing deadline for a November presidential election was too severe a burden, too stringent a ballot access requirement, to withstand constitutional scrutiny. The case included some qualifications about one state impacting a presidential election, which may, in turn, limit its value in applying the precedent in quite the same way with congressional or state offices.

But, importantly, California's top-two system limits opportunities in ways these other states with early filing deadlines don't. Because other states may permit independent candidates to secure ballot access much closer to the election (because they aren't participating in a primary), there are more opportunities there than in California, which will require filing in December of the year before the election.

The Ninth Circuit in Washington State Republican Party v. Washington State Grange in 2012 approved of the burdens on minor-party candidates in Washington's top-two system, but emphasized that the "primary is in August, not March." And that was a concern raised by the Libertarian Party, not an independent candidate (i.e., one who was not seeking a nomination from a party).

We shall see if anyone raises a sufficient constitutional challenge to this early primary. But it's worth emphasizing that the constitutional issues, while present, are only one concern. The more significant, practical concern remains, in my view, the increased insulation of incumbents who seek reelection.

How a change in the bar exam cut score could alter California legal education

Virtually all the deans of law schools in California, of ABA-accredited and California-accredited schools, have come out in favor, at multiple stages, of lowering the cut score for the California bar exam. The score, 144, is the second-highest in the country and has long been this high. Given the size of California and the number of test-takers each year, even modest changes could result in hundreds of new first-time passers each test administration.

The State Bar, in a narrowly-divided 6-5 vote, recommended three options to the California Supreme Court: keep the score; lower it to 141.1; or lower it to 139. As I watched the hearing, the dissenters seemed more in favor of keeping it at 144. At least some of the supporters seemed inclined to support the 139 score, or something even lower, but recognized the limitations of securing a majority vote on the issue. Essentially, however, the State Bar adopted the staff recommendation and offered these options to the California Supreme Court.

The Court could adopt none of these options, but I imagine it would be inclined to adopt a recommended standard, and probably the lowest standard at that, 139. (The link above includes the call from the Supreme Court to evaluate the appropriateness of the cut score, a hint, but hardly definitive, that it believes something ought to be done.)

What surprised me, however, is that there would be such unanimity among law deans, because the impact on legal education could be quite significant--and not benefit all institutions equally. Put another way, I understand the obvious short-term benefit for all institutions--some number of law school graduates who previously might have failed the exam would pass, redounding to the benefit of the institution and those graduating classes.

But that, in part, assumes that present circumstances remain the same. Invariably, they will not. Let me set up a few things that are likely to occur, and then game out some of the possible impacts these changes might have on legal education--all on the assumption that the cut score drops from 144 to 139.

First, the number of passers will increase fairly significantly. About 3480 people passed when 8150 took the July 2016 bar exam, including about 3000 first-time passers among 5400 first-time test-takers. Test-taker volume is also up significantly this administration (in part likely because of the reduction from three days to two). We should expect the number passing in that cohort to rise to about 4100—and probably more this administration, given the larger pool of test-takers. We may expect more out-of-state attorneys, or people who'd failed and given up, to start attempting the test again. Statistics also indicate that the greatest increase in new attorneys will tend to come among racial minorities, who have historically passed the bar exam at lower rates.

The change will also disproportionately benefit California-accredited schools: while ABA-accredited schools on the whole would see a 17% increase in pass rates, California-accredited schools would see about a 70% increase in pass rates. (Granted, far fewer graduates from these schools take the bar--only about 100 passed the July 2016 bar exam.)
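A quick sanity check of these figures, using the July 2016 counts cited above (the 4100 figure is the rough projection discussed in the text, not data):

```python
# July 2016 California bar exam counts from the text
took = 8150           # total test-takers
passed = 3480         # total passers at the 144 cut score
projected = 4100      # rough projection of passers at a 139 cut score

print(f"pass rate at 144: {passed / took:.1%}")       # about 42.7%
print(f"pass rate at 139: {projected / took:.1%}")    # about 50.3%
print(f"relative increase in passers: {projected / passed - 1:.0%}")  # about 18%
```

That roughly 18% overall increase sits close to the 17% figure for ABA-accredited schools, which makes sense: the much larger proportional gains at California-accredited schools involve far fewer test-takers.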

Additionally, we know that this year's test-takers scored better nationwide. If that trend translates to California, too, we would expect a few hundred more passers on top of that figure. And we may also expect the increase in test-takers to linger for a long period if more people are attracted to California because it has a modestly easier test to pass.

This obviously, in the very short term, primarily benefits those students who scored between a 139 and a 144 and would have failed the bar exam, and schools with those student populations. In the slightly longer term, it will benefit students who scored less than a 139 and, on repeat attempts, have a much higher chance of securing a 139 than a 144.

About 700 to 800 (or potentially even more, depending on the volume of test-takers) extra attorneys entering the system each July, and some smaller number each February, should slowly exert pressure on attorney prices in California, as Professor Michael Simkovic has reasoned. More lawyers means more competition, which means that prices should drop, particularly among attorneys catering to more price-sensitive clients (no one thinks Vault 100 law firms will start slashing California salaries!). It's worth noting, too, that this change may be more gradual at first--there has been a drop in test-takers overall, so the increase in pass rates may not be as dramatic unless (or until) test-taking volume rebounds to previous highs. (For instance, in the July 2013 exam, nearly 5000 passed the exam among 8900 test-takers.)

Professor Robert Anderson and I have also indicated that we would expect more attorneys to face discipline. Currently, we estimate that those who score a 144 on the bar exam face a career likelihood of discipline of around 9%. (This compares to the overall likelihood of about 5% at 35 years since admission to the bar.) Those with a 139, we project, would face a career likelihood of discipline of around 12%. The entering cohort, then, would have a somewhat higher likelihood of facing discipline at some point in a 35-year career.

Finally, some law schools will disproportionately benefit—typically schools at the lower end of the performance spectrum, but not those whose student bodies perform at the very bottom among law schools. If the cut score is lowered from 144 to 139, schools with a significant “middle” of the curve, with the bulk of their graduates scoring in a range around 135 to 145, should see the bulk of the improvement.

The chart below illustrates a very rough projection of the improvement in performance of each school from the July 2016 bar exam if the score had been lowered to 139. This is very rough because it depends on many factors, particularly the distribution of the students at each school, and should be taken only as a rough estimate—any figure could easily be a few percentage points higher or lower; and complicating the estimate is that the July 2017 results would, of course, look different. I’m simply trying to fit the projection to last year for some reference.

As you can see, in that middle band of 12 schools, those between Cal Western and Whittier, we would expect to see gains ranging from 14 to 21 points. The 11 schools at the top of the chart would generally see more modest gains of around 8 to 12 points. The 10 schools at the bottom of the chart would also see more modest improvement, typically 6 to 11 points. (The asterisks on the chart are notations for California schools that are not accredited by the American Bar Association.) There are over 50 law schools in California, but not all had sufficient test-takers to be reported in the California data.

What might these factors do to legal education in California? Potentially, quite a bit. I sketch out some possible outcomes—with an emphasis on their potentiality. A change from 144 to 139 is somewhat modest but, in a state as large as California with as many law schools and lawyers, could have significant effects. Here are a few possible things that could occur:

* * *

At least some law schools will admit larger classes. To the extent law schools were reluctant to admit larger classes because of concerns about bar passage rates, those schools will be more inclined to admit larger student bodies. Of course, there are still other reasons that schools may not increase their class sizes, or at least not substantially—they are concerned about their LSAT and UGPA medians for USNWR rankings purposes, they may be worried about finding meaningful legal employment for a larger number of graduates, and so on. But, at least one barrier in the admissions calculus has been partially removed.

Higher-ranked law schools may begin admitting more students who historically matriculated to lower-ranked law schools. That is, a new kind of competition may begin. In light of the point above, it may not simply be that schools admit larger classes; they may grab applicants who would have attended lower-ranked schools. This would exert downward pressure on lower-ranked schools as competition for their prospective students increased.

Higher-ranked law schools may see improved racial diversity profiles among incoming classes, potentially at the expense of lower-ranked schools. This is good news for highly-ranked schools and students from racially diverse backgrounds. The lower score will tend to benefit racial minorities, as the data has shown that minorities fail the bar at higher rates. So highly-ranked schools can admit more diverse student bodies with greater confidence of their success. Of course, this will exert downward pressure on lower-ranked schools, who may see their diversity applicant pools dwindle or face pools of applicants with worse predictors than in past years.

Law schools will experience more price sensitivity from prospective law students. That is, the value of a law degree should decline in California as the volume of attorneys increases and the price for lawyers drops. That should, in turn, make law students more skeptical of the existing value proposition of a law degree. Law schools that have relied on high tuition have benefited from the high bar exam cut score, because opportunities for attorneys have been relatively scarce; the drop in cut score will dilute the value of the degree and perhaps require some cost-cutting at law schools. This is not to say that an artificial constriction on the supply of lawyers is a good thing because it props up costs (in my personal view, I think it's quite a bad thing); it is to say that lowering the score will make cost-sensitivity an increasingly live concern.

California-accredited law schools will have opportunities to thrive. Look again at the chart above. San Joaquin (which had 45 first-time test-takers in July 2017) would have a projected bar pass rate of 50%. Lincoln Sacramento (which had 42 first-time test-takers) would have a projected bar pass rate of 47%. These exceed some ABA-accredited schools and start to look quite attractive to prospective law students. That’s particularly true given the tuition at these institutions. The figure below displays the full-time academic year tuition in 2016 for each of these institutions. (For institutions on the credit-hour payment model, I used 28 academic units; for Lincoln Sacramento, a four-year program, I took the total price and divided by three.) I put the schools in rank order of their (projected) bar exam performance. (As a caveat, the actual price at many institutions is much lower because many students receive scholarships that discount tuition; but, for present comparative purposes, I'm using sticker price.)

(It's worth noting in the chart above that an institution like La Verne, which charges much lower tuition than peer institutions, may see a similar benefit.) For those who oppose the regulatory burden of ABA-accreditation and wish that non-accredited institutions have an opportunity to thrive, California (with more than 30 non-ABA-accredited schools) may offer a more meaningful experiment in that effort if the cut score is lowered.

Negative impact in USNWR for elite schools, and positive impact in USNWR for more marginal schools. This category may not be immediately obvious to observers considering bar exam pass rates. That is, some might ask, wouldn't higher bar exam passing rates improve a school's USNWR profile? Not necessarily--particularly not if the overall passing rate increases.

USNWR measures bar pass rate not in absolute terms but in relative terms--the margin between a school's first-time passing rate in a jurisdiction and that jurisdiction's overall pass rate. If School A has a passing rate of 90% and School B 75%, that gap is only part of the story: School A had a 90% rate in a jurisdiction with an overall rate of 60%, which means it actually did quite well; School B had a 75% rate in a jurisdiction with an overall rate of 80%, which means it actually did poorly. USNWR measures that relative performance.

So if School A sees its passing rate increase to 93%, but the jurisdiction's overall passing rate increases to 85%, that's bad for School A in USNWR terms--its ability to outshine others in the jurisdiction has dwindled. In a state as large as California and with such a relatively low first-time overall passing rate, this gives elite schools an opportunity to shine.

Stanford, for instance, boasted a 91% first-time bar passage rate in a jurisdiction with a 56.3% first-time pass rate, a 1.62 ratio. If the cut score is dropped to 139, the bar projects an overall first-time pass rate of 64.5%. Even if Stanford's pass rate increases to a projected 96%, its ratio drops to 1.49, a 0.13-point drop. The same holds true for institutions like USC (-0.08), UCLA (-0.03), and Berkeley (-0.06). This ratio is just one factor in the USNWR ratings, and these figures are ultimately normalized and compared with other institutions nationally, but the change will marginally hurt each of these schools in the rankings--even though it might benefit a small cohort of graduates taking the bar exam each year.
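The ratio arithmetic can be checked directly with a short script (the pass rates are the figures cited above; the post-change numbers are projections, not results):

```python
def usnwr_ratio(school_rate: float, overall_rate: float) -> float:
    """USNWR-style relative performance: a school's first-time pass rate
    divided by the jurisdiction's overall first-time pass rate."""
    return school_rate / overall_rate

# Stanford: actual July 2016 rates, then projected rates at a 139 cut score
before = usnwr_ratio(0.91, 0.563)
after = usnwr_ratio(0.96, 0.645)

print(round(before, 2))          # 1.62
print(round(after, 2))           # 1.49
print(round(before - after, 2))  # 0.13
```

The school's absolute rate rises, but because the jurisdiction's baseline rises faster, the relative measure falls.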

In contrast, schools that have had below-average bar exam performance would see a significant increase—some of them in my projections moving up 0.2 points in their ratios or even more. If the school is in the unranked tier, it might help get the school into the rankings; if they are ranked lower, it might help them move up the rankings, an added benefit to their graduates passing the bar at higher rates.

* * *

I’ll emphasize what I’ve mentioned repeatedly before but is too often lost when blog posts like this are shared. I have no particularly strong views about what the bar exam cut score ought to be—where it is, a little lower, much lower, or anything else. There are costs and benefits that go along with that, and they are judgments I confess I find myself unable to adequately assess.

But these are my preliminary thoughts on things that might happen if the cut score were dropped to 139. Granted, they are contingent on many other things, and it is quite possible that many of them will not happen. But they are a somewhat evidence-based look at the future. And they show that the change in cut score may disproportionately affect some institutions in ways beyond the short-term bar exam results of graduating cohorts. Time will tell how wrong I am!

California postpones an election to help one of its own

A sure sign of political manipulation of an election is delaying it. Troubled states like the Democratic Republic of the Congo, Somalia, and Haiti have recently come under United Nations scrutiny for delaying their elections.

And then there’s California, where Democrats are attempting to postpone a recall effort to hold onto a supermajority in the legislature.

In April 2017, the California legislature approved a major new gasoline tax and annual vehicle fee signed into law by Governor Jerry Brown. The tax is projected to raise $5.2 billion per year for transportation-related projects. (For perspective on the size of the tax hike, consider that the entire state of West Virginia’s total tax revenue from all sources was $5.1 billion in 2016.)

Tax hikes require a two-thirds vote of each legislative chamber, and Democrats hold precisely a supermajority in both. The tax passed with the bare minimum support in each chamber, with one Democrat opposed and one Republican in favor (in exchange for half a billion dollars earmarked for special projects).

Republicans targeted Democratic Senator Josh Newman of Fullerton for a recall, which, if successful, could end Democratic supermajority control. Mr. Newman won his seat in 2016 by a slim 50.4%-49.6% margin.

Democrats complained that the recall campaign has been deceptive, as petition circulators broadcast that signing the petition would help “stop the car tax.” Rather than fight the recall in the political arena, however, they’ve tried to postpone the election.

The legislature swiftly enacted a law to include a number of dilatory tactics. First, the bill would permit those who signed a petition to withdraw their names up to 30 days after the petitions have been submitted. Many jurisdictions permit withdrawal of signatures while the petition is circulating. But to permit signers to withdraw after the petition has been submitted invites untold mischief. Recall opponents could initiate a counter-campaign to secure enough withdrawals and thwart the recall from ever happening.

Worse, the legislature enacted this law retroactively. While recall petitioners were in the midst of circulating their petition, the California legislature changed the rules on them. Petition circulators surely would have collected more signatures if such a law were on the books when they began.

The 30-day window also postpones the date of the recall, which is fixed by the California Constitution. Recalls must occur within 60 to 80 days of the certification of signatures, unless the petition is certified within 180 days of the next regularly scheduled general election. Governor Jerry Brown assuredly would call for the recall at the next general election if the deadline could be pushed back long enough. So the California legislature began adding dilatory time periods to push back the recall as long as possible.

Counties must verify the validity of the signatures from the petitions, usually by a statistical sample of three percent of the signatures. They check to make sure that the signatures are authentic and come from registered voters. The new law abolishes sampling as a permissible technique and requires examination and verification of each and every signature, a costly and time-consuming endeavor. This is roughly a thirty-fold increase in the time and cost of checking signatures. (The legislature didn’t even bother to find that recall signature fraud was a problem or that recall petitions needed special treatment from other election-related petitions. It simply made the process more cumbersome to slow it down.)
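The scale of that change is simple arithmetic: replacing a three percent sample with a full check multiplies the number of signatures examined by roughly 33, the source of the rough thirty-fold figure:

```python
sample_fraction = 0.03                     # former practice: verify a 3% statistical sample
full_check_multiple = 1 / sample_fraction  # new law: verify every signature
print(round(full_check_multiple, 1))       # 33.3
```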

The legislature then added a 30-day window after the signature withdrawal window closes for the Department of Finance to estimate the cost of the recall. After that, the legislature tacked on another 30-day window for the Joint Legislative Budget Committee to weigh in on the cost estimate. Only then may the Secretary of State certify the sufficiency of the recall signatures.

The bill’s final act is even more absurd. After a lawsuit challenging the law, a court stayed its application, finding that it likely violated the “single subject rule.” California requires that laws embrace one topic, and here the legislature logrolled this election law into a budget bill. Fearing that they’d lose in court, the legislature moved with remarkable speed—in a single day, August 24, a newly-amended, clean election bill made its way through both chambers and received the governor’s signature. There is a chance that a state court still finds the law unconstitutional, given, for instance, its retroactive effect and its tenuous reasons for delaying the election.

The law will affect recalls beyond Mr. Newman’s race. Efforts to recall Judge Aaron Persky, criticized for the lenient sentence he handed down to Brock Turner, convicted of sexual assault at Stanford University, will face similar delays.

Even in 2003, when California’s voters recalled Governor Gray Davis just 9 months into his term, the legislature didn’t attempt to thwart the voters.

The successive and repeated delays all but guarantee that Mr. Newman's recall, like virtually all recall elections, will be pushed to next June’s primary election. True, Mr. Newman must still, at some point, face recall. But the California Constitution’s 60-to-80-day guarantee for recalls has become a nullity.

Would Jesus oppose partisan gerrymandering?

The title may be slightly glib, but a biblical allusion caught my attention as I was reading the briefs in Gill v. Whitford, the partisan gerrymandering case before the Supreme Court. The reference appeared in the amicus brief of Heather Gerken, Jonathan Katz, Gary King, Larry Sabato, and Sam Wang--an impressive lineup, to be sure! The biblical references occur in a passage about the ubiquity of the principle of symmetry:

While modern discrimination law is replete with examples of symmetry standards, the principle’s roots are ancient. One finds, for instance, examples in Judeo-Christian ethics, Genesis 13:8-9; Matthew 7:12. The notion of turning the tables is so powerful that it is a canon of literature, William Shakespeare, A Midsummer Night's Dream; William Shakespeare, Twelfth Night; Mark Twain, The Prince and the Pauper (1881), music, W.S. Gilbert & Arthur Sullivan, H.M.S. Pinafore (1878), and moral philosophy, John Rawls, A Theory of Justice 73-78 (rev. ed. 1999). This measure of fairness is deployed across cultures. See Cinderella Across Cultures (Martine Hennard Dutheil de la Rochère et al. eds. 2016); Heather K. Gerken, Second-Order Diversity, 118 Harv. L. Rev. 1099, 1146 & n.124 (2005) (discussing Japanese tradition). Even children rely on the time-honored strategy of “I cut, you choose.”

So, no, the brief is not about whether Jesus would support or oppose partisan gerrymandering. Instead, it is a biblical allusion to the principle of symmetry.

Matthew 7:12 is the "Golden Rule": "So whatever you wish that others would do to you, do also to them, for this is the Law and the Prophets."

Unfortunately, I think this gets symmetry wrong--the Christian faith, rightly understood, including the Golden Rule, is quite asymmetrical.

Consider the Golden Rule itself: it is to do to others as you would wish they would do to you. There is no expected return from others. Indeed, there is a likelihood that others would not reciprocate. But there is no expectation of anything in return for those who adhere to the Golden Rule. The command from Jesus is to do without any expectation of anything in return. The Golden Rule can be misconstrued as anticipating or expecting some kind of mutual respect toward one another. It isn't that, as much as we might want everyone to respect one another. Instead, it is about the radical self-giving of the Christian to all others--giving, without expecting anything in return.

The brief offers the simple summary of symmetry: "Partisan symmetry is a deeply intuitive standard for measuring discrimination. It asks a simple question: what would happen if the tables were turned?" But, I think, the Gospels are replete with expectations for the Christian tradition of asymmetrical treatment and expectations.

From earlier in the Sermon on the Mount in Matthew 5, for instance, Jesus expressly rebukes a "turn the tables" standard: "You have heard that it was said, ‘An eye for an eye and a tooth for a tooth.’ But I say to you, Do not resist the one who is evil. But if anyone slaps you on the right cheek, turn to him the other also. And if anyone would sue you and take your tunic, let him have your cloak as well. And if anyone forces you to go one mile, go with him two miles. Give to the one who begs from you, and do not refuse the one who would borrow from you."

This, of course, doesn't mean that principles of governance can't be dictated by norms like symmetry. The brief is correct that symmetry has an extensive legal and non-legal tradition. (Indeed, the "eye for an eye" reference was omitted, surely a strong symmetrical standard!) And it might be that in establishing rules pertaining to representative government, symmetry is a sensible standard.

But it is to suggest something slightly more modest. Biblical allusions can be a valuable device in making a persuasive argument. But precision in understanding biblical claims is, perhaps, just as important.

Bar exam scores rebound to highest point since 2013

After last year's slight year-over-year improvement, bar exam scores are up again. The scaled mean of the Multistate Bar Exam rose 1.4 points to 141.7, the highest since 2013 (when the mean was 144.3, shortly before a hasty collapse in scores). (The MBE score is a good indicator of bar pass rates to come nationwide, but it's hardly a perfect indicator in every jurisdiction.)

scaledmbescores2017.png

For perspective, California's "cut score" is 144; Virginia's, 140; Texas's, 135; New York's, 133. A scaled mean of 141.7 is comparable to recent years like 2014 (141.5), 2005 (141.6), and 2003 (141.6).

This is good news for test-takers and law schools--perhaps the qualifications of students have rebounded a bit as schools improved their incoming classes a few years ago; perhaps students are putting more effort into the bar than previous years; or other factors. We should see a modest rise in pass rates in most jurisdictions, comparable to where they were three years ago.

Note: I chose a non-zero Y-axis to show relative performance.

An odd and flawed last-minute twist in the California bar exam discussion

My colleague Rob Anderson last night blogged about the strange turn in a recent report from the staff at the California State Bar. Hours ahead of today's meeting of the Board of Trustees, which will make a recommendation to the California Supreme Court about the appropriate "cut score" on the bar exam, new information was shared with the Board, which can be found in the report here. (For background on some of my longer thoughts on this debate, see here.)

The report doubles down on its previous claim that there is "no empirical evidence available that indicates California lawyers are more competent than those in other states." (As our draft study of the relationship between attorney discipline and bar scores in California discusses, we concluded that we lacked the ability to compare discipline rates across states because of significant variances in how state bars may handle attorney misconduct.)

But it's now added a new claim: "Nor is there any data that suggests that a higher cut score reduces attorney misconduct." Our paper is one empirical study that expressly undermines this claim. Rob digs into some of the major problems with this assertion and the "study" that comes from it; his post is worth reading. I'd like to add a couple more.

First, the paper makes an illogical jump from "Nor is there any data that suggests a higher cut score reduces attorney misconduct" to "But based on the available data, it appears unlikely that changing the cut score would have any impact on the incidence of attorney misconduct." These are two different claims. One is an absence of evidence; the other is an affirmative finding about the evidence. Additionally, the adjective "unlikely" asserts a level of certainty--how low is the probability? And on what is this judgment based? Furthermore, the paragraph is self-refuting: "Given the vast differences in the operation of different states' attorney discipline systems, these discipline numbers should be read with caution." Caution indeed--perhaps not read at all! That is, there's no effort to track differences among the states and control for them. (This is a reason we couldn't make such comparisons in our study.)

Apart from other hasty flaws Rob points out--like misspelling "Deleware" and concluding that California's discipline rate of 2.6 per thousand is "less than a third" of Delaware's 4.7 per thousand (it is actually more than half)--it's worth considering some other problems with this form of analysis.
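The arithmetic is easy to check. A quick sanity check (using the per-thousand rates as quoted from the report):

```python
# Discipline rates per thousand attorneys, as quoted from the report.
ca_rate = 2.6   # California
de_rate = 4.7   # Delaware

ratio = ca_rate / de_rate
print(round(ratio, 2))  # prints 0.55 -- more than half, not "less than a third"
```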

At a basic level, in order to compare states based on discipline rates, it must be the case that the other factors do not differ dramatically among states. But if the other factors do not differ dramatically among states, and the cut score also does not matter, then the states should have roughly equal discipline rates, which they don't.

The figure itself demonstrates a number of significant problems.

First, Figure 7 compares cut scores with attorney discipline. But it uses a single year's worth of data, 2015. The sample size is absurdly small--it projects, for instance, Vermont's discipline rate from a sample of one (the total number of attorneys disciplined in 2015). The ABA has such data for several years, but this report doesn't collect it. In contrast, our study uses over 40 years of California discipline data drawn from over 100,000 attorney records.
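To put a "sample of one" in perspective, here is an illustrative calculation of my own (not from the report or our study): the exact two-sided 95% confidence interval for a Poisson count of 1 spans more than two orders of magnitude, so a rate projected from a single disciplinary event is almost pure noise.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def poisson_ci(k, alpha=0.05):
    """Exact (Garwood-style) two-sided CI for the mean of a Poisson count k,
    found by bisection on the tail probabilities."""
    hi = 4.0 * k + 10.0  # safe upper bracket for the search
    # Lower limit: lam where P(X >= k | lam) = alpha/2 (zero when k == 0).
    lower = 0.0
    if k > 0:
        a, b = 0.0, hi
        for _ in range(80):
            mid = (a + b) / 2
            if 1 - poisson_cdf(k - 1, mid) < alpha / 2:
                a = mid  # lam too small: tail probability still below alpha/2
            else:
                b = mid
        lower = (a + b) / 2
    # Upper limit: lam where P(X <= k | lam) = alpha/2.
    a, b = 0.0, hi
    for _ in range(80):
        mid = (a + b) / 2
        if poisson_cdf(k, mid) > alpha / 2:
            a = mid  # lam too small: left tail still above alpha/2
        else:
            b = mid
    upper = (a + b) / 2
    return lower, upper

low, high = poisson_ci(1)
print(round(low, 3), round(high, 3))  # prints 0.025 5.572
```

With one observed event, the interval runs from about 0.025 to about 5.6 expected events--a roughly 200-fold range of uncertainty.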

Second, the figure doesn't control for years of practice, which can affect discipline rates. That is particularly the case if the cohort of licensed attorneys in the state skews younger or older. We find that attorneys are more likely to face discipline later in their careers, and our study accounts for years of practice.

Third, the figure doesn't recognize variances in the quality of test-takers in each state. In July 2016, for instance, California's mean MBE score was a 142.4, but Tennessee's was a 139.8. Many states don't disclose state-specific MBE data. But two states with similar cut scores may have dramatically different abilities among their test-takers, some with disproportionately higher scores. Our study accounts for differences in individual test-taker scores by examining the typical scores of graduates of particular law schools, and of the differences in typical scores between first-time test-takers and repeaters.

Fourth, the figure treats the "cut score" as static in all jurisdictions, when it has changed fairly significantly in some. This is in stark contrast to the long history of California's cut score. California has tethered its 1440 to earlier standards, from when it expected applicants to answer about 70% of a test correctly, so even when it has changed scoring systems (as it did more than 30 years ago), it has tried to hold that score as constant as it can. Other states lack that continuity: they adopted the MBE or other NCBE-related testing materials later, have changed their cut scores, or have altered their scoring methods. Tennessee, for instance, adopted scaling essay scores to the MBE only five years ago, and its earlier failure to do so assuredly resulted in inconsistent administration of standards; further, Tennessee once permitted those with a 125 MBE to pass with sufficiently high scores on the unscaled essays. South Carolina at one time required a 125 MBE score and didn't scale its essays. Evaluating discipline rates among attorneys admitted to the bar over several decades against a cut score from the July 2016 test cannot adequately measure the effect of the cut score.
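Scaling essays to the MBE generally means a linear transformation so that, for a given administration, the essay scores take on the MBE scores' mean and standard deviation. A minimal sketch of that idea (the numbers in the usage note are made up for illustration, not actual bar data, and the scores are assumed to be non-constant):

```python
import statistics

def scale_to_mbe(essay_scores, mbe_scores):
    """Linearly rescale raw essay scores so they share the MBE scores'
    mean and standard deviation for this administration.
    Illustrative sketch of the standard scaling approach only."""
    e_mean = statistics.mean(essay_scores)
    e_sd = statistics.pstdev(essay_scores)   # assumes scores are not all equal
    m_mean = statistics.mean(mbe_scores)
    m_sd = statistics.pstdev(mbe_scores)
    # Convert each essay score to a z-score, then map onto the MBE scale.
    return [m_mean + (s - e_mean) / e_sd * m_sd for s in essay_scores]
```

For example, raw essays of [60, 65, 70, 75, 80] scaled against MBE scores of [130, 135, 140, 145, 150] come out with a mean of 140 and the MBE's spread. Without this step, a lenient or harsh batch of essay graders shifts pass rates from one administration to the next, which is the inconsistency noted above.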

Let me emphasize a couple of points. I do wish that we had the ability to compare attorney discipline rates across states. I wish we could dive into state-specific data in jurisdictions where they changed the cut score, and evaluate whether discipline rates changed among the cohorts of attorneys under different standards.

But one of the things our study called for was for the State Bar to use its own internally available data on the performance of its attorneys on the bar exam, and to evaluate that data when assessing discipline. The State Bar instead chose this crude and flawed process to demonstrate something else.

Finally, let me emphasize one last point, which I continue to raise in this discussion. Our study demonstrates that lower California bar scores correlate with higher attorney discipline rates, and lowering the cut score will result in more attorneys subject to discipline. But, of course, one can still conclude in a cost-benefit analysis that this trade-off is worth it--that the discipline rates do not rise to the level of serious concern, that disciplinary problems often take years to manifest, that access to justice or other real benefits are worth the trade-off, and so on.

But it is disappointing to ignore or use deeply flawed data about the relationship between attorney discipline and the bar exam cut score in this process, particularly when it is dumped the night before the Trustees meet to evaluate the issue.

A few longer thoughts on the four debates about the bar exam

As the debate rages in California and other places about the utility of the bar exam, it's become fairly clear that a number of separate but interrelated debates have been conflated. There are at least four debates that have been raging, and each requires a different line of thought--even if they are all ostensibly about the bar exam.

First, there is the problem of the lack of access to affordable legal representation. Such a lack of access, some argue, should weigh in favor of changing standards for admission to the bar, specifically the bar exam. But I think the bar exam is only a piece of this debate, and perhaps, in light of the problem, a relatively small piece. Solutions such as implementing the Uniform Bar Exam; offering reciprocity for attorneys admitted in other states; reducing the costs to practice law, such as lowering bar exam or annual licensing fees; finding ways to make law school more affordable; or opening up opportunities for non-attorneys to practice limited law, as states like Washington have done, should all be matters of consideration in a deeper and wider inquiry. (Indeed, California's simple decision to reduce the length of the bar exam from three days to two appears to have incentivized prospective attorneys to take it: July 2017 test-takers are up significantly year-over-year, and most of that increase is not attributable to repeat test-takers.)

Second, there is the problem of whether the bar exam adequately evaluates the traits necessary to determine whether prospective attorneys are minimally competent to practice. It might be the case that requiring students to memorize large areas of substantive law and evaluating their performance on a multiple-choice and essay test is not an ideal way for the State Bar to operate. Some have pointed to a recent program in New Hampshire to evaluate prospective attorneys on a portfolio of work developed in law school rather than the bar exam standing alone. Others point to Wisconsin's "diploma privilege," where graduates of the University of Wisconsin and Marquette University are automatically admitted to the bar. An overhaul of admissions to the bar generally, however, is a project that requires a much larger set of considerations. Indeed, it is not clear to me that debates over things like changing the cut score, implementing the UBE, and the like are even really related to this issue. (That said, I do understand those who question the validity of the bar exam to suggest that if it's doing a poor job of separating competent from incompetent attorneys, then it ought to have little weight and, therefore, the cut score should be relatively low to minimize its impact among likely-competent attorneys who may fail.)

Third, there is the problem of why bar exam passing rates have dropped dramatically. This is a question of causation, one that has not yet been entirely answered. It is not because the test has become harder, but some have pointed to incidents like the ExamSoft software failure or the addition of Civil Procedure to the MBE as factors that may have contributed to the decline. A preliminary inquiry from the California State Bar, examining the decline just in California, identified a decline in the quality of the pool of test-takers as part of the reason for the drop in bar passage scores. I became convinced that this was the bulk of the explanation nationally, too. An additional study in California is underway to examine the effect with more granular, school-specific data. If the cause is primarily a decline in test-taker quality and ability, then lowering the cut score would likely lower the quality and ability of the pool of available attorneys. But if the drop is attributable to other causes, such as changes in study habits or test-taker expectations, then lowering the cut score may have less of an impact on that pool. (Indeed, it appears that higher-quality students are making their way through law schools now.) Without a thorough attribution of cause, it is difficult to identify what the solution to this problem ought to be.

Fourth, there is the debate over what the cut score ought to be for the bar exam. I confess that I don't know what the "right" cut score is--Wisconsin's 129, Delaware's 145, something in between, or something different altogether. I'm persuaded that pieces of evidence, like California's standard-setting study, may support keeping California's score roughly in place. But that is just one component of many. (And, of course, California's high cut score means that test-takers fail at higher rates despite being more capable than most test-takers nationally.) Part of my uncertainty is that I'm not sure I fully appreciate all the competing costs and benefits that come with changes in the cut score. While my colleague Rob Anderson and I find that lower bar scores are correlated with higher career discipline rates, facts like these can only take one so far in evaluating the "right" cut score. Risk tolerance and cost-benefit analysis have to do the real work.

(I'll pause here to note the most amusing part of critiques of Rob's and my paper. We make a few claims: lower bar scores are correlated with higher career discipline rates; lowering the cut score will increase the number of attorneys subject to higher career discipline rates; the state bar has the data to evaluate with greater precision the magnitude of the effect. No one has yet disputed any of these claims. We don't purport to defend California's cut score, or defend a higher or lower score. Indeed, our paper expressly disclaims such claims! Nevertheless, we've faced sustained criticism for a lot of things our paper doesn't do--which I suppose shouldn't be surprising given the sensitivity of the topic for so many.)

There are some productive discussions on this front. Professor Joan Howarth, for example, has suggested that states consider a uniform cut score. Jurisdictions could aggregate data and resources to develop a standard that they believe best accurately reflects minimum competence--without the idiosyncratic preferences of this state-by-state process. Such an examination is worth serious consideration.

It's worth noting that state bars have done a relatively poor job of evaluating the cut scores. Few evaluate them much at all, as California's lack of scrutiny for decades demonstrates. (That said, the State Bar is now required to undertake an examination of the bar exam's validity at least once every seven years.) States have been adjusting, and sometimes readjusting, the scores with little explanation.

Consider that just in 2017 alone, Connecticut is raising its cut score from 132 to 133, Oregon is lowering it from 142 to 137, Idaho from 140 to 136, and Nevada from 140 to 138. Some states have undergone multiple revisions in a few years. Montana, for instance, raised its cut score from 130 to 135 for fear it was too low, then lowered it to 133 for fear it was too high. Illinois planned on raising its cut score from 132 in 2014 to 136 in 2016, then, after raising the score to 133, delayed implementing the 136 cut score until 2017, and delayed again in 2017 "until further order." Certainly, state bars could benefit from more, and better, research.

Complicating these inquiries are mixed motives of many parties. My own biases and priors are deeply conflicted. At times, I find myself distrustful of any state licensing systems that restrict competition, and wonder whether the bar exam is very effective at all, given the closed-book memory-focused nature of the test. I worry when many of my students who'd make excellent attorneys fail the bar, in California and elsewhere. At other times, I find myself persuaded by studies concerning the validity of the test (given its high correlation to law school grades, which, I think, as a law professor, are often, but not always, good indicators of future success), and by the fact that, if there's going to be a licensing system in place, then it ought to try to be as good as it can be given its flaws and all.

At times, though, I realize these thoughts are often in tension because they are sometimes addressing different debates about the bar exam generally--maybe I'd want a different bar exam, but if we're going to have one it's not doing such a bad job; maybe we want to expand access to attorneys, but the bar exam is hardly the most significant barrier to access; and so on. And maybe even here, my biases and priors color my judgment, and with more information I'd reach a different conclusion.

In all, I don't envy the task of the California Supreme Court, or of other state bar exam authorities, during a turbulent time for legal education in addressing the "right" cut score for the bar exam. The aggressive, often vitriolic, rhetoric from a number of individuals concerning our study in discipline rates is, I'm sure, just a taste of what state bars are experiencing. But I do hope they are able to set aside the lobbying of law deans and the protectionist demands of state bar members to think carefully and critically, with precision, about the issues as they come.

A poor attorney survey from the California State Bar on proposals to change the bar exam cut score

I'm not a member of the California State Bar (although I've been an active member of the Illinois State Bar for nearly 10 years), so I did not receive the survey that the state bar circulated late last week. Northwestern Dean Dan Rodriguez tweeted about it, and after we had an exchange he kindly shared the survey with me.

I've defended some of the work the Bar has done, such as its recent standard-setting study, which examined bar test-taker essays to determine "minimum competence." (I've mentioned that the study is understandably limited in scope, particularly given time constraints. The Bar has shared a couple of critiques of the study here, which are generally favorable but identify some of its weaknesses.) And, of course, no one study should determine what the cut score ought to be, but it's one data point among the many studies coming along.

Indeed, the studies, so far, have been done with some care and thoughtfulness despite the compressed time frame. Ron Pi, Chad Buckendahl, and Roger Bolus have long been involved in such projects, and their involvement here has been welcome.

Unfortunately, despite my qualified praise, the State Bar has circulated a poor survey to its members about the proposed changes to the cut score. Below are screenshots of the email circulated and the most salient portions of the survey.

It is very hard to see what this survey can accomplish except to get a general sense of members' feelings about what the cut score ought to be. And it's not terribly helpful in answering that question.

For instance, there's little likelihood that attorneys understand what a score of 1440, 1414, or "lower" means. There's also a primed negativity in the option "Lower the cut score further below the recommended option of 1414"--of course, there were two recommended options (hold in place, or lower to 1414), yet the phrasing is not just "below" but "further below." And what do these numbers mean to attorneys? The Standard-Setting Study was designed to determine which essays met the reviewing panel's definition of "minimum competence"; how would most lawyers know what these numbers mean in terms of defining minimum competence?

The survey, instead, is more likely a barometer of how protectionist members of the State Bar currently are. If lawyers don't want more lawyers competing with them, they'll likely prefer that the cut score remain in place. (A more innocent reason is possible, too--a kind of hazing: "kids these days" should have to meet the same standards existing members met when they were admitted to the bar.) To the extent the survey amounts to asking members whether to open the spigot controlling the flow of new lawyers or hold it in place, it represents the worst that a state bar has to offer.

The survey also asks, on a scale of 1 to 10, the "importance" attorneys assign to "statements often considered relevant factors in determining an appropriate bar exam cut score." The statements range from generic ones most lawyers would find very important, like "maintaining the integrity of the profession," to ones that weigh almost exclusively in favor of lowering the cut score, like "declining bar exam pass rates in California."

One problem, of course, is that these rather generic statements have been tossed around in debates, but how is one supposed to decide which are appropriate costs and benefits? Perhaps this survey is one way of testing the profession's interests, but it's not entirely clear why two distinct issues are being conflated: what the cut score ought to be to establish "minimum competence," and the potential tradeoffs at stake in decisions to raise or lower it.

In a draft study with Rob Anderson, we identified that lower bar scores are correlated with higher discipline rates and that lowering the cut score would likely result in more attorney discipline. But we also identified a number of potential benefits from lowering the score, which have been raised by many--greater access to attorneys, lower costs for legal services for the public, and so on. How should one weigh those costs and benefits? That's the sticky question.

I'm still not sure what the "right" cut score is. But I do feel fairly certain that this survey to California attorneys is not terribly helpful in moving us toward answering that question.

How much post-JD non-clerkship work experience do entry-level law professors have?

Professor Sarah Lawsky performs her tireless annual service of compiling entry-level law professor hires. One chart of interest to me is the year of the JD: in recent years, about 10-20% of entering law professors earned their JD within the last four years; 45-60% within the last five to nine years; and 25-30% within the last 10 to 19 years, with a negligible number 20 or more years ago.

But there's a different question I've had, one that's been floating out there as a rule of thumb: how much practice experience should an entering law professor have? Of course, "should" is a matter of preference. Aspiring law professors usually mean it as, "What would make me most attractive as a candidate?" or "What are schools looking for?"

There are widely varied schools of thought, I imagine, but a common rule of thumb I'd heard was three to five years of post-clerkship experience, and probably no more. (Now that I'm trying to search where I might have first read that, I can't find it.) In my own experience, I worked for two years in practice after clerking. Some think more experience is a good thing to give law professors a solid grounding in the actual practice of law they're about to teach, but some worry too much time in practice can inhibit academic scholarship (speaking very generally); some think less experience is a good or a bad thing for mostly the opposite reasons. (Of course, what experience law professors ought to have, regardless of school hiring preferences, is a matter for a much deeper normative debate!)

I thought I'd do a quick analysis of post-JD work experience among entry-level law professors. I looked at the 82 United States tenure-track law professors listed in the 2016 entry-level report. I did a quick search of their law school biographies, CVs, or LinkedIn accounts for their work experience and put it into one of several categories: 0, 1, 2, 3, 4, 5+, "some," or "unknown." 5+ because I thought (perhaps wrongly!) that such experience would be relatively rare and start to run together over longer periods of time; "some" meaning the professor listed post-JD work experience but the dates were not immediately discernible; or "unknown" if I couldn't tell.

I also chose to count only "post-clerkship" experience. Clerkship experience is different in kind--it is rightly still a kind of work experience, but I was interested specifically in the non-clerkship variety. I excluded independent consultant work and judicial staff attorney/clerk positions, but I included non-law-school fellowships. I also excluded any academic position from post-JD non-clerkship work experience. I excluded pre-JD work experience, of course, but included all post-JD work experience whether law-related or not (e.g., business consulting). All figures are probably +/-2.
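For concreteness, the bucketing described above could be sketched like this (a hypothetical helper for illustration only--the actual tally was done by hand from biographies, CVs, and LinkedIn profiles):

```python
def bucket_experience(years):
    """Map post-JD non-clerkship work experience to the categories used in
    the post: "0" through "4", "5+", "some", or "unknown".
    `years` is a non-negative int, the string "some" (experience listed but
    dates not discernible), or None (couldn't tell)."""
    if years is None:
        return "unknown"
    if years == "some":
        return "some"
    # Five or more years collapse into one bucket; otherwise keep the count.
    return "5+" if years >= 5 else str(years)
```

So a candidate with seven years in practice lands in "5+", and one who went straight from a clerkship to a fellowship to teaching lands in "0".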

There are going to be lots of ways to slice and dice the information, so I'll offer three different visualizations. First, 23 of the 82 entering law professors (28%) had no post-JD non-clerkship work experience. 56 had at least some, and 3 had unknown experience. That struck me as a fairly large number of "no work experience." (If you included clerkships, 13 of those "nones" had clerkships, and 10 had no clerkship experience.) I thought most of the "nones" might be attributable to increases in PhD/SJD/DPhil hires, and that accounts for about two-thirds of that category.

I then broke it down by years of experience.

24 had one to four years' experience; 21 had five or more years' experience; and 11 had "some" experience, to an extent I was unable to quickly determine. (Be careful with this kind of visualization; the "some" makes the 1-4 & 5+ categories appear smaller than they actually are!) I was surprised that 21 (about 26%) had at least five years' post-JD non-clerkship work experience, and many had substantially more than that. Perhaps I shouldn't have been surprised, as about 30% earned their JD at least 10 years ago; but I thought a good amount of that might have been attributable to PhD programs, multiple clerkships, or multiple VAPs. It turns out 5+ years' experience isn't "too much" based on recent school tenure-track hiring.

For the individual total breakdown, here's what I found:

This visualization overstates the "nones" because, unlike the first chart, it breaks out each category--but it shows each category as I collected it. Note the big drop-off from "0" to "1"!

Again, all figures are likely +/-2 and based on my quickest examination of profiles. If you can think of a better way of slicing the data or collecting it in the future, please let me know!