A poor attorney survey from the California State Bar on proposals to change the bar exam cut score

I'm not a member of the California State Bar (although I've been an active member of the Illinois State Bar for nearly 10 years), so I did not receive the survey that the State Bar circulated late last week. Northwestern Dean Dan Rodriguez tweeted about it, and after we had an exchange he kindly shared the survey with me.

I've defended some of the work the Bar has done, such as its recent standard-setting study, which examined bar test-taker essays to determine "minimum competence." (I mentioned the study is understandably limited in scope, particularly given the time constraints. The Bar has shared a couple of critiques of the study here, which are generally favorable but identify some of its weaknesses.) And, of course, one study alone should not determine what the cut score ought to be, but it's one data point among the several studies coming along.

Indeed, the studies, so far, have been done with some care and thoughtfulness despite the compressed time frame. Ron Pi, Chad Buckendahl, and Roger Bolus have long been involved in such projects, and their involvement here has been welcome.

Unfortunately, despite my praise (with some caveats about understandable limitations), the State Bar has circulated a poor survey to its members about the potential changes to the cut score. Below are screenshots of the email circulated and most of the salient portions of the survey.

It is very hard to understand what this survey can accomplish except to get a general sense of how members of the bar feel about what the cut score ought to be. And a general sense of attorneys' feelings is not terribly helpful in answering that question.

For instance, there's little likelihood that attorneys understand what a score of 1440, 1414, or "lower" means. There's also a primed negativity in the option "Lower the cut score further below the recommended option of 1414": there were two recommended options (hold the score in place, or lower it to 1414), and anything else is framed as not just "below" but "further below." Additionally, what do these scores mean to attorneys? The Standard-Setting Study was designed to determine which essays met the reviewing panel's definition of "minimum competence"; how would most lawyers know what these numbers mean in terms of defining minimum competence?

The survey, instead, is more likely a barometer of how protectionist members of the State Bar currently are. If lawyers don't want more lawyers competing with them, they'll likely prefer the cut score to remain in place. (A more innocent reason is possible, too, a kind of hazing: "kids these days" need to meet the same standards they needed to meet when they were admitted to the bar.) To the extent the survey is about whether to turn the spigot that controls the flow of new lawyers, adding more or holding the number in place, it represents the worst that a state bar has to offer.

The survey also asks attorneys to rate, on a scale of 1 to 10, the "importance" they assign to "statements often considered relevant factors in determining an appropriate bar exam cut score." These statements range from generic ones most lawyers would find very important, like "maintaining the integrity of the profession," to ones that weigh almost exclusively in favor of lowering the cut score, like "declining bar exam pass rates in California."

One problem, of course, is that these rather generic statements have been tossed about in debates, but how is one supposed to decide which of them are appropriate costs and benefits to weigh? Perhaps this survey is one way of testing the profession's interests, but it's not clear why the survey conflates two distinct issues: what the cut score ought to be to establish "minimum competence," and the potential tradeoffs at stake in decisions to raise or lower the cut score.

In a draft study with Rob Anderson, we identified that lower bar scores are correlated with higher discipline rates and that lowering the cut score would likely result in more attorney discipline. But we also identified many potential benefits from lowering the score, which have been raised by many--greater access to attorneys, lower costs for legal services for the public, and so on. How should one weigh those costs and benefits? That's the sticky question.

I'm still not sure what the "right" cut score is. But I do feel fairly certain that this survey to California attorneys is not terribly helpful in moving us toward answering that question.

How much post-JD non-clerkship work experience do entry-level law professors have?

Professor Sarah Lawsky offers her tireless annual service of compiling entry-level law professor hires. One chart of interest to me is the year of the JD: in recent years, about 10-20% of entering law professors obtained their JD within the last four years; 45-60% within the last five to nine years; and 25-30% within the last 10 to 19 years, with a negligible number at least 20 years ago.

But there's a different question I've had, one that's been floating out there as a rule of thumb: how much practice experience should an entering law professor have? Of course, "should" is a matter of preference. Aspiring law professors often mean it as, "What would make me most attractive as a candidate?" or "What are schools looking for?"

There are widely varied schools of thought, I imagine, but a common rule of thumb I'd heard was three to five years of post-clerkship experience, and probably no more. (Now that I'm trying to search where I might have first read that, I can't find it.) In my own experience, I worked for two years in practice after clerking. Some think more experience is a good thing to give law professors a solid grounding in the actual practice of law they're about to teach, but some worry too much time in practice can inhibit academic scholarship (speaking very generally); some think less experience is a good or a bad thing for mostly the opposite reasons. (Of course, what experience law professors ought to have, regardless of school hiring preferences, is a matter for a much deeper normative debate!)

I thought I'd do a quick analysis of post-JD work experience among entry-level law professors. I looked at the 82 United States tenure-track law professors listed in the 2016 entry-level report. I did a quick search of their law school biographies, CVs, or LinkedIn accounts for their work experience and put it into one of several categories: 0, 1, 2, 3, 4, 5+, "some," or "unknown." I capped the count at "5+" because I thought (perhaps wrongly!) that such experience would be relatively rare and would start to run together over longer periods of time; "some" means the professor listed post-JD work experience but the dates were not immediately discernible; and "unknown" means I couldn't tell.

I also chose to categorize "post-clerkship" experience. I think clerkship experience is different in kind; it is still rightly a kind of work experience, but I was specifically interested in the non-clerkship variety. I excluded independent consultant work and judicial staff attorney/clerk positions, but I included non-law-school fellowships. Any academic position was also excluded from post-JD non-clerkship work experience. I excluded pre-JD work experience, of course, but included all post-JD work experience whether law-related or not (e.g., business consulting). All figures are probably +/-2.
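
For anyone curious about the mechanics, here is a minimal sketch of the binning described above, using made-up profile data purely for illustration (the category labels match mine, but nothing else here comes from the actual dataset):

    from collections import Counter

    def bin_experience(years):
        # Bin post-JD, non-clerkship work experience into the categories used above.
        # `years` is an integer count of years, the string "some" when dates were
        # not discernible, or None when nothing could be determined.
        if years is None:
            return "unknown"
        if years == "some":
            return "some"
        return "5+" if years >= 5 else str(years)

    # Hypothetical entries for illustration only (not the actual 2016 data).
    profiles = [0, 0, 2, "some", 7, None, 3, 12, 0, 1]

    counts = Counter(bin_experience(y) for y in profiles)
    total = len(profiles)
    for category in ["0", "1", "2", "3", "4", "5+", "some", "unknown"]:
        n = counts[category]
        print(f"{category:>7}: {n} ({n / total:.0%})")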

There are going to be lots of ways to slice and dice the information, so I'll offer three different visualizations. First, 23 of the 82 entering law professors (28%) had no post-JD non-clerkship work experience. 56 had at least some, and 3 had unknown experience. That struck me as a fairly high number of "no work experience." (If you include clerkships, 13 of those "nones" had clerkships, and 10 had no clerkship experience.) I thought most of the "nones" might be attributable to increases in PhD/SJD/DPhil hires, and that accounts for about two-thirds of that category.

I then broke it down by years of experience.

24 had one to four years' experience; 21 had five or more years' experience; and 11 had "some" experience, to an extent I was unable to quickly determine. (Be careful with this kind of visualization; the "some" makes the 1-4 & 5+ categories appear smaller than they actually are!) I was surprised that 21 (about 26%) had at least five years' post-JD non-clerkship work experience, and many had substantially more than that. Perhaps I shouldn't have been surprised, as about 30% earned their JD at least 10 years ago; but I thought a good amount of that might have been attributable to PhD programs, multiple clerkships, or multiple VAPs. It turns out 5+ years' experience isn't "too much" based on recent law school tenure-track hiring.
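
As a quick consistency check on those counts (all figures still +/-2), the categories do sum to the 82 hires examined:

    counts = {"none": 23, "1-4 years": 24, "5+ years": 21, "some": 11, "unknown": 3}
    total = sum(counts.values())
    assert total == 82  # the 82 entry-level hires in the 2016 report

    for category, n in counts.items():
        print(f"{category:>9}: {n:2d} ({n / total:.0%})")
    # none 28%, 1-4 years 29%, 5+ years 26%, some 13%, unknown 4%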

For the individual total breakdown, here's what I found:

This visualization overstates the "nones": unlike the first chart, it breaks out each category separately, but these are the categories as I collected them. Note the big drop-off from "0" to "1"!

Again, all figures are likely +/-2 and based on my quickest examination of profiles. If you can think of a better way of slicing the data or collecting it in the future, please let me know!

How should we think about law school-funded jobs?

One of the most contentious elements of the proposed changes to the way law schools report jobs to the American Bar Association is how to handle law school-funded jobs. In my letter, I noted that more information and a more careful examination of costs and benefits would be needed before reaching a decision about how best to treat them. The letters to the ABA mostly fall along more black-and-white lines: school-funded positions should be treated like any other job, or they should remain a separate category as they have been the last two years.

Briefly, school-funded positions may offer opportunities for students to practice, particularly in public interest positions, in a transition period toward opportunities where funding may be lacking. At their best, they provide students with much-needed legal experience in these fields and can help them continue into long careers in such work. At their worst, however, they are opportunities for schools to inflate their employment placement figures by spending money on student placements with no assurance about what those placements look like after the year is complete.

In the last couple of years, the number of positions that would even qualify as "law school-funded" has been severely limited. I noted in 2016 that these positions had dropped by half, to fewer than 400 positions nationwide, accompanying the change in the USNWR reporting system that gave these positions "less weight" than non-funded positions. Jerry Organ rightly noted that much of the decline was probably attributable to a definitional change: only jobs lasting at least a year and paying an annual salary of at least $40,000 would count.

This methodological change likely weeded out many of the lower-quality positions from the school-funded totals.

So, are law school-funded positions good outcomes, or not? It seems impossible to tell from the evidence, because we have essentially no data about what happens to students in these positions after the funding ceases. We have some generic assurances that schools are successful in placing these students into good jobs; we have others who express deep skepticism about that likelihood. One major reason I endorse the proposal to postpone the change to employment reporting data is to find out more about what these positions actually lead to! (Alas, that seems unlikely.)

But we do have one piece of data from Tom Miles (Chicago), who wrote in his letter to the ABA: "97% of new graduates who have received one of our school-funded Postgraduate Public Interest Law Fellowships remained in public interest or government immediately after their fellowships; 45% of them with the organization who hosted their fellowship."

That is impressive placement. If such statistics are similar across institutions, it would be a very strong reason, in my view, to move such positions back into the "above the line" totals with other job placement.

Finally, my colleague Rob Anderson did a principal components analysis of job placement and found that law school-funded positions were a relatively good, if minor, job outcome among institutions.

It may be that the worst excesses of the recession-era practices of law schools are behind us, and that these school-funded positions are providing the kinds of opportunities that are laudable. More investigation from the ABA would be most beneficial. But it's also likely that the impact will be quite modest in the event the ABA chooses to adopt the changes this year.

My letter to the ABA Section on Legal Education re proposed changes to law school employment reporting

On the heels of commentary from individuals like Professor Jerry Organ and Dean Vik Amar, I share the letter I sent to the ABA's Section on Legal Education regarding changes to the Employment Summary Report and the classification of law-school funded positions. (Underlying documents are available at the Section's website here.) Below is the text of the letter:

---

Dear Mr. Currier,

I:

1) petition the Council to suspend implementation of the proposal until at least the Class of 2018, and direct the Section to implement for the Class of 2017, the Employment Questionnaire as approved at the June meeting, together with the Employment Summary Report used for the Class of 2016, and

2) petition the Council to direct the Standards Review Committee to

a. delineate all of the changes in the Employment Questionnaire that would be necessary to implement the proposal, and

b. provide notice of all proposed changes to the Employment Questionnaire and Employment Summary Report and an opportunity to comment on the proposed changes before the Council takes any further action to implement the proposal.

The unusual and truncated process to adopt these proposals is reason enough to oppose the change. But the substance merits additional discussion.

In particular, I do not believe the statements made in the Mahoney Memorandum sufficiently address the costs of returning to the pre-2015 system of reporting school-funded employment figures as "above the line" totals. The Memorandum contains speculative language in justification of the position advanced ("The NLJ assumed, as would any casual reader," "Many readers may never have learned of the error," "we must assume"), language which should be the basis for further investigation and a weighing of costs and benefits, not of reaching a definitive outcome.

Additionally, the Memorandum uses incomplete statistics to advance its proposal--in particular, that "School-funded positions accounted for 2% of reported employment outcomes for the class of 2016" is more relevant if such positions are distributed roughly equally across institutions. But these positions are not distributed roughly equally, and the fact that a few institutions bear a disproportionate number of such positions should merit deeper investigation before examining the impact of such a change.

Furthermore, the Memorandum's proposal, adopted in the Revised Employment Outcomes Schools Report, includes material errors (including overlapping categories of employment by firm size, where an individual in a firm with 10 people would be both in a firm with "2-10" and "10-100"; the same for 100 and the categories "10-100" and "100-500") that never should have made it to the Section.

I find much to be lauded in the objectives of the Section in the area of disclosure. Improving disclosures to minimize reporting errors, streamline unnecessary categories, and provide meaningful transparency in ways that consumers find beneficial is a good and important goal. The Section should do so with the care and diligence it has shown in its past revisions, which is why it ought to suspend implementation of this proposal.

Best,

/s/ Prof. Derek T. Muller

More evidence suggests California's passing bar score should roughly stay in place

Plans to lower California's bar exam score may run up against an impediment: more evidence suggesting that the bar exam score is about where it ought to be.

My colleague Rob Anderson and I recently released a draft study noting that lower bar passage scores are correlated with higher discipline rates, and urging more collection of data before bar scores are lowered.

There are many data points that could, and should, be considered in this effort. The California State Bar has been working on some such studies for months. California students are more able than students in other states but fail the bar at higher rates, because California's cut score (144, or 1440 in California's scoring, which is simply 10x) is higher than that of most other jurisdictions.

Driven by concerns expressed by the deans of California law schools, and at the direction of the California Supreme Court, the State Bar began to investigate whether the cut score was appropriate. One such study was a "Standard Setting Study," and its results were published last week. It is just one data point, with obvious limitations, but it almost perfectly matches the current cut score in California.

A group of practitioners looked at a batch of bar exam essays and graded them, assessing each as "not competent," "competent," or "highly competent." Those assessments were then refined to find the "tipping point" from "not competent" to "competent." (An evaluation of the study notes this is similar to what other states have done in their own standard-setting studies, which have resulted in a variety of changes to bar pass cut scores; that independent evaluation identified critiques of the study but concluded that its methodology was sound overall and its results valid.)

The mean recommended passing score from the group was 145.1--more than a full point higher than the actual passing score! The median passing score was 143.9, almost identical to the 144.0 presently used. (The study explains why it believes the median is the better score.)

Using a +/-1 standard error, the mean score may range from 143.6 to 148.0, and the median score from 141.4 to 147.7. All are well above the 133-136 scores common in many other jurisdictions, including New York's 133. And this study is largely consistent with a study in California 30 years ago, when a similar crisis arose over low passing rates, a study I identified in a recent blog post.

So, what to do with this piece of evidence? The researchers offered two recommendations for public comment and consideration: keep the score where it is, or reduce the passing score to 141.4 for the July 2017 exam alone. (Note: what a jackpot it would be for the bar test-takers this July if they received a one-time reprieve!) The recommendations nicely note many of the cost-benefit issues policymakers ought to consider--and include some reasons why California has policy preferences that may weigh in favor of a lower score (at least temporarily). The interim proposal to reduce the score to 141.4, one standard error below the recommended median value of 143.9, takes these policy considerations into account. Such a change may be modest, but it could result in a few hundred more test-takers passing the bar on the first attempt in California.
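
To make the arithmetic concrete, here's a small sketch of how the interim figure relates to the numbers above (the standard error value is simply inferred from the study's stated median and the interim recommendation, not taken from the report itself):

    # Figures reported above, on the scale where California's cut is 144.0.
    current_cut = 144.0          # reported by California as 1440, i.e., 10x this scale
    recommended_median = 143.9   # the panel's median recommended passing score
    interim_proposal = 141.4     # one standard error below the median, per the recommendation

    implied_standard_error = recommended_median - interim_proposal
    print(f"Implied standard error: {implied_standard_error:.1f}")       # 2.5

    # California reports scores at 10x this scale.
    print(f"Current cut as reported:      {current_cut * 10:.0f}")       # 1440
    print(f"Interim proposal as reported: {interim_proposal * 10:.0f}")  # 1414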

Alas, the reaction to a study like this has been predictable. Hastings Dean David Faigman--who called our study "irresponsible" and accused the State Bar of "unconscionable conduct" over the July 2016 bar exam--waited until the results dropped to critique the study with another quotable adjective soundbite, labeling it "basically useless." (A couple of other critiques are non-responsive to the study itself.)

Of course, one piece of data--like the Standard Setting Study--should not dictate the future of the bar exam. Nor should the study by my colleague Rob Anderson and me. Nor should the standard-setting study from 30 years ago. But they all point toward some skepticism that the bar exam cut score is dramatically or outlandishly too high. It might be, as the cost-benefit and policy analysis in the recommendations to the State Bar suggests, that some errors ought to be tolerated with a slight lowering of the score. Or it might be that the score should remain in place.

Whatever it is, more evidence continues to point toward keeping it roughly in the current place, and more studies in the future may offer new perspectives on the proper cut score.

Some good news, and some perspective, on the June 2017 LSAT

The Law School Admission Council recently shared that LSAT administrations increased significantly year-over-year: a 19.8% jump. In historical terms, the June 2017 test had more test-takers than any June since 2010, which had a whopping 32,973 (the last year of the "boom" cycle for law school admissions). That's good news for law schools looking to fill their classes with more students, and, hopefully, more qualified students. I've visualized the last decade of June LSAT administrations below.

Of course, there are many more steps along the way to a better Class of 2021: applicant quality among those test-takers, whether they turn into actual applicants, etc. And given the potential for schools to accept the GRE instead of the LSAT, LSAT administrations may slightly understate expectations for future applicants.

But one data point is worth considering, and that's repeat test-takers. LSAC discloses that data in somewhat opaque ways, but it's worth looking at how many first-time test-takers were among the June 2017 pool.

First-time test-takers give a better picture of the likely changes to the quality and quantity of the applicant pool. Repeaters are permitted to use their highest score, which is a worse indicator of their quality. (They may now retake the test an unlimited number of times.) Additionally, first-time test-takers represent genuinely new potential applicants, as opposed to repeaters, who were probably already inclined to apply (or perhaps have already applied and are seeking better scholarship offers).

Repeat test-takers have been slowly on the rise, as the graphic above (barely!) demonstrates. First-time test-takers made up 84.9% of the June 2007 LSAT administration. That number has eroded every June since, and this June saw first-time test-takers make up 74% of the administration. About 27,600 took the test, and 20,430 for the first time; compare that to June 2011, when there were fewer test-takers (about 26,800), but more who took it for the first time (21,610).
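
As a quick check on those shares, using the approximate counts above:

    # Approximate counts reported above.
    june_2017 = {"total": 27_600, "first_time": 20_430}
    june_2011 = {"total": 26_800, "first_time": 21_610}

    for label, counts in (("June 2017", june_2017), ("June 2011", june_2011)):
        share = counts["first_time"] / counts["total"]
        print(f"{label}: {share:.1%} first-time test-takers")
    # June 2017: about 74.0% first-time; June 2011: about 80.6% first-time.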

There is some cause for optimism among law schools looking for a boost in their admissions figures. But there's also a little perspective to keep in mind about what these figures actually represent.

"The Kobach fallout on election security"

I have a guest post at Rick Hasen's Election Law Blog. It begins:

The Presidential Advisory Commission on Election Integrity offered its first public request this week, as Vice Chair and Kansas Secretary of State Kris Kobach requested voter information from every state. That single request has likely done long-lasting damage to the political ability of the federal government to regulate elections. In particular, any chance that meaningful election security issues would be addressed at the federal level before 2020 worsened dramatically this week.

The request is sloppy, as Charles Stewart carefully noted, and, at least in some cases, forbidden under state law. The letter was sent to the wrong administrators in some states, it requests data like “publicly-available . . . last four digits of social security number if available” (which should never be permissible), and it fails to follow the proper protocol in each state to request such data.

Response from state officials has been swift and generally opposed. It has been bipartisan, ranging from politically-charged outrage, to drier statements about what state disclosure law permits and (more often) forbids.

But the opposition reflects a major undercurrent from the states to the federal government: we run elections, not you.

Puerto Rican statehood and the effect on Congress and the Electoral College

After last weekend's low-turnout referendum in Puerto Rico, in which voters overwhelmingly favored statehood, it's worth considering the impact statehood might have on representation and elections, despite the low likelihood of Puerto Rico actually becoming a state.

Puerto Rico would receive two Senators, increasing the size of the Senate to 102.

Census estimates project that Puerto Rico would send five members to the House. The House has not expanded in size since 1929, so Puerto Rico's delegation would come at the expense of other states' delegations. In 1959, however, with the admission of Hawaii and Alaska, the House temporarily increased from 435 members to 437, then dropped back to 435 after the 1960 Census and reapportionment. Congress might do something similar with Puerto Rico upon statehood. (For some thoughts about doubling the size of the House, see my post on the Electoral College.)

Based on projections for 2020, Puerto Rico's five seats would likely come at the expense of one seat each from California, Montana, New York, Pennsylvania, and Texas. (It's worth noting these are based on the 2020 projections; Montana is likely to receive a second representative after the 2020 reapportionment.)

This would also mean that in presidential elections, Puerto Rico would have 7 electoral votes, and these five states would each lose an electoral vote. The electoral vote total would be 540, and it would take 271 votes to win.
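
A quick sketch of that arithmetic, assuming the House stays capped at 435 seats and the District of Columbia keeps its three electors under the Twenty-Third Amendment:

    house_seats = 435        # capped since 1929; Puerto Rico's 5 seats come out of this total
    senate_seats = 100 + 2   # two new senators for Puerto Rico
    dc_electors = 3          # Twenty-Third Amendment

    puerto_rico_electors = 5 + 2  # 5 representatives + 2 senators
    total_electoral_votes = house_seats + senate_seats + dc_electors
    majority_to_win = total_electoral_votes // 2 + 1

    print(puerto_rico_electors)   # 7
    print(total_electoral_votes)  # 540
    print(majority_to_win)        # 271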