In the Orange County Register: "Judicial vacancies threaten the rule of law"

Last week, the Orange County Register published my opinion piece, "Judicial vacancies threaten the rule of law." It begins:

There is a judicial crisis in California, but you won’t hear the judges talking about it. Those professionals work tirelessly without complaint. But California needs more federal judges, and it needs them with higher salaries. Otherwise, access to justice will be diminished, and the rule of law will be threatened.

Annual Statement, 2017

Site disclosures

Total operating cost: $192

Total content acquisition costs: $0

Total site visits: 82,014* (-33% over 2016)

Total unique visitors: 68,435 (-36% over 2016)

Total pageviews: 101,049 (-29% over 2016)

Top referrers:
Twitter (7338)
Above the Law (2787)
Facebook (1840)
Top-Law-Schools (1327)
Reddit (1050)
Election Law Blog (820)
ABA Journal (728)
Brian Leiter's Law School Reports (699)
TaxProf (608)

Most popular content (by pageviews):
Ranking the most liberal and conservative law firms (July 16, 2013) (14,035)
February 2017 MBE bar scores collapse to all-time record low in test history (Apr. 7, 2017) (12,771)
Visualizing the 2018 U.S. News law school rankings--the way they should be presented (Mar. 14, 2017) (4959)
The best prospective law students read Homer (Apr. 7, 2014) (3558)
No, the MBE was not "harder" than usual (Sept. 28, 2015) (2170)
Sorting out the Alabama Senate election possibilities in light of Roy Moore (Nov. 9, 2017) (2169)

I have omitted "most popular search results" (99% of search queries are not disclosed by the search engine, and there were very few common searches in 2017).

Sponsored content: none

Revenue generated: none

Platform: Squarespace

Privacy disclosures

External trackers: one (Google Analytics)

Individuals with internal access to site at any time in 2017: one (Derek Muller)

Status of 2016 faithless presidential elector litigation

One year ago, December 19, 2016, an unprecedented number of faithless electors intentionally cast (or attempted to cast) votes for candidates other than those they pledged to support, either Donald Trump or Hillary Clinton. Congress ultimately decided to count all the electoral votes as cast.

But some of these faithless (or would-be faithless) electors sued, and the litigation remains ongoing. Much like my tracking of "natural born" citizen lawsuits, I thought I'd share the status of faithless elector litigation.

California: an elector wanted to cast a vote for someone other than Hillary Clinton and Tim Kaine but ultimately voted for them.

Vinzenz Koller: lawsuit filed, Koller v. Brown (N.D. Cal. 2016-cv-07069), motion to dismiss granted Apr. 20, 2018

Colorado: two electors threatened to vote for candidates other than Hillary Clinton and Tim Kaine but ultimately voted for them. A third attempted to vote for John Kasich, but his vote was not counted and he was removed for failure to act.

Michael Baca, Polly Baca, & Robert Nemanich: lawsuit filed, Baca v. Colorado Department of State (D. Colo. 17-cv-01937), motion to dismiss granted Apr. 10, 2018, appeal filed (10th Cir. 18-1173)

Minnesota: an elector attempted to vote for Bernie Sanders instead of Hillary Clinton and was replaced.

Muhammad Abdurrahman: complaint dismissed as moot (D. Minn. 16-cv-04279); appeal filed (8th Cir. 16-4551), Abdurrahman v. Swanson, affirmed Sept. 12, 2018.

Washington: four faithless electors were each fined $1000 for casting votes for candidates other than Hillary Clinton and Tim Kaine. The status of the state administrative appeals is below.

Robert Satiacum: administrative order became final June 13, 2017.

Levi Guerra, Esther John, & Peter Chiafalo: federal lawsuit (W.D. Wash. 16-cv-01886) voluntarily dismissed; state administrative appeal to Thurston County Superior Court, Docket No. 17-2-02446-34; Guerra v. State Office of Administrative Hearings, affirmed, Dec. 8, 2017; appeal filed with Supreme Court (No. 953473), brief filed Aug. 10, 2018, set for motion calendar Oct. 2, 2018

A secret small world of "other" law school admissions

Okay, perhaps the title's a bit sensational. But American Bar Association ("ABA") data this year, for the first time, breaks out a couple of categories of 1L law school enrollment. One category is "enrollment from law school applications." The other is "other enrollment."

Typical "application" admissions occurs from the process you might expect: in a very traditional timeline, submit an application in November or December, wait for that envelope (or email?) in March or April, then enroll for a term beginning in August. Of the ABA's 37,400 first-year enrollees reported this year, 36,321 come from this category.

But another 1079 enrollees come from an "other" category. (Admittedly, this is a sliver of the overall admissions picture.) That opaque category includes four groups of enrollees:

  • Students admitted in a prior year who deferred enrollment until the current year
  • Students admitted in a prior year who took a leave of absence
  • Readmits with fewer than 15 credits
  • Students admitted with fewer than 15 credits of prior law study

This is a brand new category of ABA disclosures, designed, apparently, to capture "odd" admissions.

Of those 1079 enrollees, 419 come from just 20 schools (the 20 where "other" enrollees make up the highest percentage of the first-year class; a short sketch of the arithmetic follows the table). And these schools are hardly what one might consider peer schools.

USNWR Rank | School | App Enrollees | Other | Pct Other
1 | Yale University | 163 | 42 | 20.5%
2 | Harvard University | 477 | 83 | 14.8%
Tier 2 | District of Columbia | 82 | 11 | 11.8%
145 | Ohio Northern University | 46 | 6 | 11.5%
Tier 2 | Thomas Jefferson School of Law | 215 | 26 | 10.8%
Tier 2 | Charleston School of Law | 225 | 26 | 10.4%
Tier 2 | Atlanta's John Marshall Law School | 194 | 22 | 10.2%
20 | University of Southern California | 169 | 18 | 9.6%
18 | Washington University | 204 | 21 | 9.3%
2 | Stanford University | 164 | 16 | 8.9%
Tier 2 | California Western School of Law | 240 | 23 | 8.7%
Tier 2 | Florida Coastal School of Law | 97 | 9 | 8.5%
n/r | Concordia Law School | 44 | 4 | 8.3%
Tier 2 | Widener-Commonwealth | 118 | 10 | 7.8%
59 | University of Missouri | 85 | 7 | 7.6%
Tier 2 | Western Michigan University | 424 | 34 | 7.4%
8 | University of Virginia | 296 | 23 | 7.2%
Tier 2 | Appalachian School of Law | 68 | 5 | 6.8%
11 | University of Michigan | 299 | 21 | 6.6%
Tier 2 | St. Thomas University (Florida) | 173 | 12 | 6.5%
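To make the arithmetic behind the table explicit, here is a minimal sketch in Python, using a few rows copied from the table above: the "Pct Other" column is simply "other" enrollees divided by the full first-year class, and the two ABA categories sum to the reported national 1L total.

```python
# A minimal sketch of the arithmetic behind the table above; the figures are
# copied from the ABA data reproduced in this post.

schools = {
    # school: (application-based 1L enrollees, "other" 1L enrollees)
    "Yale University": (163, 42),
    "Harvard University": (477, 83),
    "University of Virginia": (296, 23),
}

for name, (app, other) in schools.items():
    pct_other = other / (app + other)  # "other" share of the full 1L class
    print(f"{name}: {pct_other:.1%} of the 1L class from 'other' enrollment")

# The two ABA categories sum to the reported national 1L total:
print(36_321 + 1_079)  # 37,400 first-year enrollees
```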

Of these 20 schools, 7 are among the top 20 in the USNWR rankings; 10 are among the lowest-ranked schools in USNWR's "Tier 2" designation; and the remaining three are unranked Concordia, 145th-ranked Ohio Northern, and 59th-ranked Missouri. It is almost an entirely binary set of schools--the very elite and the marginal.

So, here comes some speculation.

The Yale 1L class, for instance, includes a roughly 20% share of students who did not apply in the past year--they deferred, took leave, started a handful of credits at another institution (not likely), or were readmitted with a handful of credits from Yale (again, not likely). Yale is very generous in its deferral program. Harvard's "Junior Deferral Program" likely also accounts for a significant chunk.

Admitting these students as "deferrals" makes sense. Students get into their dream school, like Yale or Harvard, and want to postpone law school to do Teach for America, save a little more money, or travel the world; rather than reapply in a second round of admissions, they simply defer--they don't need to apply anywhere else. At many other schools, however, students would probably not defer but reapply in a subsequent admissions cycle, hoping, perhaps, that admissions standards drop (even slightly!), that an improved personal statement or senior-year grades would put them over the top, or that an LSAT retake will make them shine.

At the other end of the spectrum, it appears that many of the more marginal schools admit a number of students who carry at-risk flags--for instance, students who were academically dismissed after earning only a small number of credits.

But, you'll note, I have to speculate here. The ABA decided to lump all four of these categories into one heap, and even then it failed to disclose on the public-facing website what these "other" categories were in the first place. Perhaps in the future we'll see more granular data. Until then, we have only an opaque picture of this secret (small) world of law school admissions.

LSAT trends show increase in test-takers and project modest 2018 JD enrollment increase

In my last post, I looked at the law school enrollment figures for 2017. What might happen in 2018?

While LSAT test-takers are up, it's worth emphasizing that an increasing percentage of test-takers are repeaters, not first-time test-takers. On the flip side, the growing number of schools accepting the GRE as an alternative to the LSAT means that LSAT figures may understate the number of law school applicants next year.

More important than the increase in LSAT test-takers, however, is their quality. As I emphasized years ago, the quality of the applicant pool matters in much the way that the quantity does. Professor Jerry Organ has helpfully examined the increase in quality.

(It's worth noting that LSAC changed its data for law school applicants in 2016; it explains, "Archived data for 2015 and prior years include applicants for the fall term only and also include deferrals; therefore, archived data are not comparable to current data." They are, however, close enough for our present comparative purposes; and 2016 and 2017 are comparable, although I only have an estimate for 2017 right now.)

Let's also put some recent LSAT and enrollment data side by side. We saw 1L JD enrollment largely flat for the fourth straight year, and overall law school enrollment may well have bottomed out.

But LSAT test-takers have increased each year since 2015: from 101,600, to 105,900, to 109,400, with a projected 125,000 test-takers this cycle. LSAT test-takers are not proportionately translating into applicants; indeed, despite a 3.3% increase in LSATs administered last year, applicants actually declined slightly, and matriculants increased only 0.8%. Part of this, as I've identified, is attributable to increased numbers of repeaters taking the LSAT. But there are other reasons why LSATs administered are not translating into applicants--reasons I can only speculate about at this time. In part, low-quality test-takers may have contributed to inflated LSAT statistics, but we may be seeing a reversal.
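For reference, here is a rough sketch of those year-over-year changes, using the rounded test-taker counts cited above (the final figure is the projection for the current cycle, not a final count):

```python
# Year-over-year changes in LSAT test-takers, using the rounded counts cited
# above; the 125,000 figure is a projection, not a final count.

test_takers = [101_600, 105_900, 109_400, 125_000]

for prev, curr in zip(test_takers, test_takers[1:]):
    change = (curr - prev) / prev
    print(f"{prev:,} -> {curr:,}: {change:+.1%}")
# Prints roughly +4.2%, +3.3%, and a projected +14.3%.
```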

That said, surely such a significant increase in the number of LSAT test-takers should yield at least some increase in applicants and matriculants--particularly given the quality of those test-takers. Only time will tell. For now, stagnant JD enrollment is the status quo, and law schools can look forward to a glimmer of hope for some improvement in 2018.

2017 law school enrollment: JD enrollment flat, nearly 1 in 7 are not in the JD program

The 2017 law school enrollment figures have been released, and they reveal flat JD enrollment and a sharp uptick in non-JD enrollment.

Total JD enrollment is at its lowest point since 1974, when 105,708 students were enrolled in just 157 ABA-accredited law schools. Enrollment dropped slightly from last year, down to 110,156.

1L enrollment is actually slightly up, from 37,107 last fall to 37,398 this year. It's the fourth straight year of enrollment in the 37,000-range.

Earlier I predicted that non-JD legal enrollment would decline this year due to uncertainty in immigration and travel rules from the new presidential administration. That is emphatically not the case. Instead, there's a whopping 20% increase in non-JD enrollment, from 13,677 in 2016 to 16,428 this fall. Perhaps some of this arises from the jump in non-JD online degrees, particularly "masters of legal studies"-type degrees.

The growth has been explosive in recent years. When coupled with the decline and flattening of JD enrollment, the relative figures are, in my view, staggering. 13% of all students enrolled in law schools are not a part of a JD program--nearly 1 in 7 students. That's up from 11% last year, 10.3% in 2015, and 9.1% in 2014.
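As a quick back-of-the-envelope check on those figures, here is a short sketch using only the enrollment totals reported in this post:

```python
# Back-of-the-envelope check of the non-JD figures reported in this post.

jd_total_2017 = 110_156     # total JD enrollment, fall 2017
non_jd_2017 = 16_428        # total non-JD enrollment, fall 2017
non_jd_2016 = 13_677        # total non-JD enrollment, fall 2016

growth = (non_jd_2017 - non_jd_2016) / non_jd_2016
share = non_jd_2017 / (jd_total_2017 + non_jd_2017)

print(f"Non-JD growth over 2016: {growth:.0%}")                   # ~20%
print(f"Non-JD share of all law school enrollment: {share:.0%}")  # ~13%
```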

I've earlier wondered about a coming reckoning for non-JD legal education, a market largely unregulated by the American Bar Association and with essentially no disclosure of student inputs or outcomes. And I wonder how long this trajectory might continue.

In light of this enrollment data, I'll shortly project some things about the Class of 2018.

Is the ABA any good at evaluating judicial nominees?

The American Bar Association ("ABA") has long evaluated federal judicial nominees, and it has received some scrutiny for how it goes about doing so. There have been empirical studies showing that Republican-nominated judicial candidates tend to receive lower scores than Democratic-nominated candidates--studies that admittedly have their own limitations.

...as an aside, I've also found it interesting to dig through the ratings of those who appeared on President Donald Trump's "list" of prospective Supreme Court nominees ("WQ" = well qualified, "Q" = qualified, "NQ" = not qualified; "sm" and "min" indicate the rating of a substantial majority and of a minority of the committee):

Brett Kavanaugh: revised rating Q (sm), WQ (min) (backstory on downgraded rating here)

Thomas Hardiman: Q (sm), NQ (min) [on nomination to Third Circuit, WQ (1 abstention)]

Raymond Kethledge: WQ (sm), Q (min)

Amul Thapar: WQ (1 abstention)

Diane Sykes: WQ (sm), Q (min)

Steven Colloton: Q (sm), WQ (min), NQ (min)

Raymond Gruender: Q

Neil Gorsuch: WQ

Timothy Tymkovich: Q (sm), NQ (min)

Bill Pryor: Q (sm), NQ (min)

Federico Moreno: Q

To be fair, there are different traits that might make one a good district court judge, appellate judge, and Supreme Court justice. But it's worth noting, I think, that the very candidates a Republican presidential administration considers as most worthy of a Supreme Court nomination received, on the whole, fairly middling grades from the ABA.

Back to the topic at hand. I want to set these debates aside for a moment and look instead at something else. Is the ABA any good at doing what it purports to do?

As the Standing Committee on the Federal Judiciary reports, "the Committee focuses strictly on professional qualifications: integrity, professional competence and judicial temperament." The goal is to "ensure that the most qualified persons serve on the federal judiciary."

Is the ABA any good at that?

It is hard to say. In part, that's because the ABA is typically looking backward at a candidate's record, then trying to project forward how the ABA believes that person will behave as a judge. It might be the case that past performance is an indicator of future success, but it also might be the case that the ABA is relying on weak measures of "qualifications."

One problem is the "rating" system itself, which lacks any nuance and instead offers the kind of thumbs-up/thumbs-down (and thumbs-sideways) of a movie review. Yes, there are probably several minutes of thoughtful film commentary that could precede that final rating, but, here, the ABA actually leaves all that commentary on the cutting room floor. All we have are opaque inputs and a single output.

One of the criteria that the ABA uses is "experience," and it includes some hard-and-fast proxies for experience: "The Committee believes that a prospective nominee to the federal bench ordinarily should have at least twelve years’ experience in the practice of law." This isn't a terribly thoughtful criterion, even if it has the advantage of being a fairly clear rule. That said, one would be hard-pressed to think a rule like this does very much to fill out the term "qualified" or "not qualified." After all, Roger Ebert might well have said, "If a movie comes in under an hour and twenty minutes, I give a thumbs down." But if we are in no rush to get younger judges, then perhaps it's a fairly harmless criterion.

Additionally, the committee makes other kinds of ex ante determinations about what makes a good judge, like "substantial courtroom and trial experience as a lawyer or trial judge is important." These tend to skew the judiciary toward those with more practical experience, true; but they also skew it toward litigators and trial lawyers. For appellate judicial nominees, the ABA places "somewhat less emphasis on the importance of trial experience as a qualification for the appellate courts." It prizes certain types of experience: "While the Committee recognizes that civic activities and public service are valuable experiences for a prospective nominee, they are not a substitute for significant experience in the practice of law in either the private or public sector."

For those presidents who pre-screened their lists of applicants with the ABA, the results can be frustrating. President Barack Obama saw the ABA reject 14 of his prospective judicial nominees as "not qualified" in his first three years in office. As one account of the Obama administration's complaints put it: "In particular, they have questioned whether the panelists — many of whom are litigators — place too much value on courtroom experience at the expense of lawyers who pursued career paths less likely to involve trials, like government lawyers and law professors."

Now, perhaps these ABA litigators are right, and perhaps their criteria are superior. Could that be measured? That would be a new and valuable area for future study of the ABA's ratings. But it is also difficult to quantify. Allow me to offer a few thoughts.

First, we have a handful of notoriously bad-behaving judges we can examine.

Thomas Porteous was rated unanimously "qualified" (not "well qualified"), but he was impeached and removed for committing perjury by signing false financial disclosure forms and abusing his judicial office.

Samuel Kent was unanimously rated "well qualified," but he was impeached and later resigned from office for lying about sexual misconduct involving female employees.

Mark Fuller, in contrast, received a "qualified" rating with a minority "not qualified," before resigning after an investigation involving allegations about spousal abuse. (I should add, maybe this is a hard thing to measure ex ante....)

Second, I looked at a couple of the recently cited examples of more controversial nominees, and then I examined what litigants had to say about those judges in the Federal Judicial Almanac.

Roger Benitez received a substantial majority "not qualified," with a minority "qualified." Here are highlights from the Federal Judicial Almanac on him:

Alison Nathan received a majority "qualified," with a minority "not qualified." From the Almanac:

As a law professor who has to read (often inconsistent) student evaluations of my own performance each semester, I'm well aware of the limitations evaluations like these might present. But, in my view, they reflect, on the whole, that both judges are, with some possible weaknesses, capable and competent (even "qualified") judges. And, of course, perhaps someone will counter that using the Almanac to evaluate judges has an entirely different set of flaws.

Now, I have no idea how many Type I and Type II errors come out of the ABA's evaluation of judicial nominees, at least insofar as it's trying to anticipate who is "qualified" for the bench. I just cherry-picked a few examples, and I made no effort to dig deeper.
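To be concrete about what a Type I or Type II error would even mean here, consider a purely illustrative sketch (the records below are hypothetical placeholders, not data): treat a "qualified"-or-better rating as a positive prediction of fitness, so a Type I error is a favorably rated judge who later behaves badly, and a Type II error is an unfavorably rated judge who performs well.

```python
# Purely illustrative; the records below are hypothetical placeholders, not data.
# Treating a "qualified"-or-better ABA rating as a positive prediction of fitness:
#   Type I error  (false positive): rated favorably, later behaves badly
#   Type II error (false negative): rated unfavorably, performs well

records = [
    # (rated_qualified_or_better, performed_well_on_the_bench)
    (True, True),
    (True, False),    # a Porteous- or Kent-style Type I error
    (False, True),    # a Type II error
    (False, False),
]

type_i = sum(1 for rated, performed in records if rated and not performed)
type_ii = sum(1 for rated, performed in records if not rated and performed)
print(f"Type I errors: {type_i}, Type II errors: {type_ii}")
```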

Additionally, these judges are hardly a random sample. They are selected, at times, to comport with the ABA criteria, and, with a couple of recent exceptions in Republican administrations, they exclude candidates unanimously rated "not qualified." Other political reasons sink some nominees. And some problems arise so far into a judge's tenure that the ABA could not reasonably have been expected to anticipate them when evaluating the candidate's qualifications.

That said, I do think there would be tremendous value in examining the Federal Judicial Almanac entries of recent nominees and trying to compare them with ABA ratings. I assume, but perhaps I am wrong in the assumption, that the correlation between "qualified" ratings and the feedback from litigants is uneven. Additionally, I wonder if, over the course of a judge's career, the judge's capabilities (particularly instincts regarding exercises of discretion) improve to a degree that lessens the significance of any shortcomings. (Admittedly, learning on the job may be cold comfort to some early litigants!)

All this is to say, I wonder, setting aside the political critiques of the ABA at the moment, whether its evaluation process is even doing what it's designed to do in the first place.

A small data point on an emergency designation in a California election

Emergencies and elections don't get along well. The threat of emergencies, like acts of terrorism or massive weather events, remains a concern in elections, but we seem to have few structures in place to handle such events.

Fires in Sonoma County, California, prompted Governor Jerry Brown to sign an executive order on October 19, 2017, declaring an all-mail-ballot election for the November 7 election.

I was a bit skeptical of what I thought was a fairly late emergency declaration and wondered how it might play out. Granted, it was a low-turnout election, but on the surface it appears that canceling in-person voting had a negligible impact, if any, on turnout.

The 2013 election had 6,364 absentee and 1,248 precinct ballots for 35.2% turnout. The 2015 election had 7,003 absentee and 1,235 precinct ballots for 33.0% turnout. Surely, a high percentage of voters already casting absentee ballots helps minimize any damage from canceling precinct voting. The final results this election were 6,590 absentee ballots for 31.2% turnout--a decline, but no bigger than the decline between the previous two off-year elections.
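Here is a small sketch of that comparison, using only the ballot counts and turnout rates reported above for these off-year elections:

```python
# Turnout comparison using the ballot counts and turnout rates reported above.
# The 2017 election was all-mail, so it has no precinct ballots.

elections = {
    # year: (absentee ballots, precinct ballots, reported turnout)
    2013: (6_364, 1_248, 0.352),
    2015: (7_003, 1_235, 0.330),
    2017: (6_590, 0, 0.312),
}

for year, (absentee, precinct, turnout) in elections.items():
    print(f"{year}: {absentee + precinct:,} ballots cast, {turnout:.1%} turnout")

# Turnout declines, in percentage points:
print(f"2013 -> 2015: {(0.330 - 0.352) * 100:+.1f} points")  # about -2.2 points
print(f"2015 -> 2017: {(0.312 - 0.330) * 100:+.1f} points")  # about -1.8 points
```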

I'm certainly not equipped to address matters like the competitiveness of the candidates or contentiousness of the issues or any of the many other confounding variables that could affect turnout. But, it's a small data point to consider in the larger scheme of thinking about how to handle emergencies and elections.