New draft posted on SSRN: "Weaponizing the Ballot"

I’ve just posted a draft of an article, Weaponizing the Ballot, on SSRN here. From the abstract:

States are considering legislation that would exclude presidential candidates from appearing on the ballot if they fail to disclose their tax returns. These proposals exceed the state’s power under the Elections Clause and the Presidential Electors Clause. States have no power to add qualifications to presidential or congressional candidates. But states do have constitutional authority to regulate the manner of holding elections and to direct the manner of appointing presidential electors. Manner regulations that relate to the ballot are those that affect the integrity and reliability of the electoral process itself or that require a preliminary showing of substantial support. In other words, they are procedural rules to help voters choose their preferred candidate. Tax disclosure requirements are not procedural election rules, which means they fall outside the scope of the state’s constitutional authority to administer federal elections and are unconstitutional.

And from the introduction:

This Article makes three principal contributions to help understand the scope of state authority to regulate access to the ballot in federal elections. First, while states may not add qualifications to candidates seeking federal office, this Article finds that “manner” regulations may at times legitimately affect the ability of candidates to win office. Second, this Article defines the constitutional scope of “manner” rules as election process rules, and it synthesizes alternative judicial formulations of state power over the “manner” of holding elections as variations of this definition. Third, this Article applies this definition to proposals that compel disclosure of information as a condition of ballot access—applied here to tax returns, but applicable to other disclosures like medical records or school transcripts—and finds that they exceed the state’s power to regulate the manner of holding elections.

I’m pleased to share this major work, which builds on ideas I floated in a New York Times op-ed many months ago and on my scholarship about state control over ballot rules generally and review of qualifications of candidates for federal office.

"Nonjudicial Solutions to Partisan Gerrymandering"

I have posted this draft of an Essay forthcoming in the Howard Law Journal, “Nonjudicial Solutions to Partisan Gerrymandering.” Here is the abstract:

This Essay offers some hesitation over judicial solutions to partisan gerrymandering, hesitation consistent with Justice Frankfurter’s dissenting opinion in Baker v. Carr. It argues that partisan gerrymandering reform is best suited for the political process and not the judiciary. First, it traces the longstanding roots of the problem and the longstanding trouble the federal judiciary has had engaging in the process, which cautions against judicial intervention. Second, it highlights the weaknesses in the constitutional legal theories that purport to offer readily-available judicially manageable standards to handle partisan gerrymandering claims. Third, it identifies nonjudicial solutions at the state legislative level, solutions that offer more promise than any judicial solution and that offer the flexibility to change through subsequent legislation if these solutions prove worse than the original problem. Fourth, it notes weaknesses in judicial engagement in partisan gerrymandering, from opaque judicial decisionmaking to collusive consent decrees, that independently counsel against judicial involvement.

This Essay is a contribution to the Wiley A. Branton/Howard Law Journal Symposium, "We The People? Internal and External Challenges to the American Electoral Process."

Comparing Google Scholar's H5 index to Sisk-Leiter citations

After the latest release of Professor Greg Sisk’s scholarly impact measure for law school faculties, Professor Brian Leiter blogged a series of smaller rankings of individual faculty members in different scholarly areas. I thought I’d use the data for a quick look at the differences between measures of scholarly activity. The Sisk-Leiter method is a longstanding project; I thought I’d compare it to Google Scholar’s H5 index.

One major barrier to using Google Scholar is that it only works for those who create an account (absent a time-consuming back channel like Publish or Perish). But the two measures also do different things.

Google Scholar's index covers more works than the Sisk-Leiter methodology, including far more non-law and interdisciplinary works. It's a value judgment as to which metric ought to matter--or, perhaps, it's a reason to consider both and acknowledge they measure different things!

Google Scholar gives "credit" for an author being cited multiple times in a single piece; Sisk-Leiter gives "credit" for only one mention. The downside for Sisk-Leiter is that an author who has 12 of her articles cited in a single piece receives credit in Google Scholar for 12 citations, but only 1 in Sisk-Leiter. On the flip side, an author who cites himself 12 times in a single piece receives credit in Google Scholar for 12 citations, but only 1 in Sisk-Leiter--and, I think, self-citations are, on the whole, less valuable when measuring "impact."

Google Scholar covers all authors; Sisk-Leiter excludes authors whose names are omitted behind "et al." There is a method to help mitigate this concern, but, again, this tends to benefit interdisciplinary scholars in Google Scholar, and tends to benefit (through omission) the more typical sole-authored law pieces in Sisk-Leiter. That said, Professor Leiter updated his blog’s rankings with some corrections from Professor Ted Sichelman.

Google Scholar counts only citations in the scholarship it indexes; Sisk-Leiter extends to all mentions, including blog posts or opinion pieces typically not indexed in Google Scholar. It's another value judgment as to which metric ought to matter. In this dimension, Sisk-Leiter can be broader than Google Scholar.

Sisk-Leiter offers a greater reward for a few highly-cited works; H5 offers a greater reward for breadth and depth of citations. This is specific to the H5 measure in Google Scholar as opposed to Google Scholar more generally: I chose to compare Sisk-Leiter to the Google H5 index instead of the C5 (total citations in the last five years) index. H5 is the largest number X such that X pieces have each received at least X citations in the last five years. So if you have 10 articles that have each received at least 10 citations since 2013, your H5 index is 10. It doesn’t matter if your 11th piece has 9 citations; it doesn’t matter if one of your 10 pieces has 10,000 citations. It’s a measure of depth and breadth, different in kind from total citations.
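To make the H5 computation concrete, here is a minimal sketch in Python (the citation counts are invented purely for illustration):

```python
def h5_index(citations):
    """H5: the largest X such that X pieces each have at least
    X citations in the last five years."""
    h5 = 0
    # Rank pieces from most- to least-cited; the index is the last
    # rank at which the piece's citation count still meets its rank.
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h5 = rank
        else:
            break
    return h5

# Ten pieces with at least 10 citations each: H5 is 10, no matter
# how highly cited the top piece is or how close the 11th comes.
print(h5_index([10000, 50, 30, 20, 15, 12, 11, 10, 10, 10, 9]))  # 10
```

The example shows the point in the text: the 10,000-citation outlier and the 9-citation near-miss both leave the index unchanged.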

In the chart below, I logged the Sisk-Leiter citations and compared them to the Google H5 index. I drew from about 85 scholars who both appeared in the Leiter rankings and had a public Google Scholar profile, and I looked at their Google Scholar profiles this fall (which may mean that figures are slightly off from today’s figures). Google Scholar is also only as good as the profiles are, so if scholars have failed to maintain their profiles with recent publications, it may understate their citations. I highlighted in blue circles those identified in the Leiter rankings as age 50 and under.

I included a trendline to show the relationship between the two sets of citations. Those “above” the line have higher Sisk-Leiter scores than their Google H5 scores would predict and “benefit,” in a sense, from the use of that metric over Google H5. Those “below” the line, in contrast, would “benefit” more from the use of Google H5. At a glance, it’s worth considering that perhaps more “pure law” scholars sit above the line and more interdisciplinary scholars below it—not a judgment about one or the other, and only a broad generalization, but one way of thinking about how we measure scholarly impact, and perhaps a reason to think more broadly about faculty impact. Recall, too, that this chart selectively includes faculty, and that some citation totals vary wildly due to the particular fields scholars write in. The usual caveats about the data apply—there are weaknesses to every citation metric, and this is just a way of comparing a couple of them.
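For readers curious about the mechanics, here is a rough sketch of the log transformation and trendline, with invented numbers standing in for the real data:

```python
import numpy as np

# Hypothetical data, for illustration only: each scholar's
# Sisk-Leiter citation count and Google Scholar H5 index.
sisk_leiter = np.array([120, 300, 850, 2000, 4500])
h5 = np.array([8, 12, 18, 25, 33])

# Log the Sisk-Leiter citations, then fit a least-squares trendline.
log_sl = np.log10(sisk_leiter)
slope, intercept = np.polyfit(h5, log_sl, 1)
predicted = slope * h5 + intercept

# "Above" the line: logged Sisk-Leiter score exceeds what the trend
# predicts from the H5 index; "below" the line is the reverse.
residuals = log_sl - predicted
above_line = residuals > 0
```

The log transform compresses the wide spread of raw citation counts so the fitted line (and who sits above or below it) is not dominated by a few very highly cited scholars.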

"Cyber Interference in Elections and Federal Agency Action"

I have this piece up at the Harvard Law Review Blog, Cyber Interference in Elections and Federal Agency Action. It begins:

Pop quiz: which part of the federal government is tasked with preventing cyber interference in our elections?

Congress has refused to say. We have reached a point of a significant gap between an important federal need and existing federal power. And in the absence of that federal power, federal agencies have stepped into the gap and extended their authority into domains unanticipated by Congress.

Forthcoming article: "The Democracy Ratchet"

Over at SSRN, I've posted a draft of The Democracy Ratchet, forthcoming in the Indiana Law Journal. Comments welcome! The abstract:

Litigants seeking to lift burdens on the right to vote and judges adjudicating these claims have an unremarkable problem—what is the benchmark for measuring the nature of these burdens? Legal theories abound for claims under the constellation of rights known as the "right to vote." And when a legislature changes a voting practice or procedure, courts may have an easy benchmark—they can consider what the right to vote looked like before and after the enactment of the new law, and they can evaluate a litigant’s claim on that basis. Recently, federal courts have been relying on this benchmark for the principal causes of action litigants might raise after a new law has been enacted—a Section 2 challenge under the Voting Rights Act, a freedom of association claim subject to the Burdick balancing test, and an Equal Protection analysis derived from Bush v. Gore. And frequently, courts have found that new laws that eliminate once-available voting practices or procedures fail.

I describe this new practice as the Democracy Ratchet. But it is only recently that a convergence of factors has driven courts to (often unwittingly) adopt the Democracy Ratchet more broadly. So while a legislature can expand voting opportunities, courts scrutinize cutbacks on those opportunities with deep skepticism—deeper than if no such opportunity had ever existed. The ratchet tightens options, squeezing the discretion that legislatures once had.

This Article seeks to solve the puzzle of how courts have scrutinized, and should scrutinize, legislative changes to election laws. Part I identifies recent instances in which federal courts have invoked a version of the Democracy Ratchet. It identifies the salient traits of the Democracy Ratchet in these cases. Part II describes why the Democracy Ratchet has gained attention, primarily as a tactic of litigants and as a convenient benchmark in preliminary injunction cases. Part III examines the history of the major federal causes of action concerning election administration—Section 2 of the Voting Rights Act, the Burdick balancing test, and the Equal Protection Clause. In each, it traces the path of the doctrine to a point where a version of the Democracy Ratchet might be incorporated into the test. It concludes that these causes of action do not include a substantive Democracy Ratchet. Part IV turns to determine how the Democracy Ratchet might be used. It concludes that the Democracy Ratchet is best identified as an evidentiary device and a readily-available remedy for courts fashioning relief. It then offers suggestions for its appropriate use. Part V identifies some concerns with existing use of the Democracy Ratchet and instances in which it may be incorrectly used. It offers guidance for courts handling changes to election laws. Part VI concludes.

Which Supreme Court justices are the topic of the most academic articles?

A recent draft article about Justice Kennedy's influence and legacy sparked a social media discussion about which justices attract the most academic attention.

I looked at the Westlaw database and searched for "ti(justice /2 [lastname])." The "ti()" field is slightly broader than the title of the article alone, but for this purpose it captured almost exclusively articles with a justice's name in the title. I didn't distinguish Chief Justice Rehnquist's time as an Associate Justice (with that title) from his time as Chief Justice, but I added a "chief" to my Roberts search to separate out hits for Justice Owen Roberts (there was at least one...). This search also would typically remove results for "the Rehnquist Court" or "the Roberts Court," which are less about the chief justices in particular, but it may slightly undercapture articles about those two justices. UPDATE: This methodology somewhat undercaptures references to justices that are more colloquial (e.g., those using just the last name without the title "justice") or that include the justice's name as an author without a title in a book review, but it eliminates far more false positives for most other justices, who have more common names than "Scalia," "Alito," and "Sotomayor."

I imagine there might also be a logarithmic effect one might observe--or expect to observe--over the course of a justice's career. As a justice begins, few, if any, articles will be written about her; as her influence increases over time, we would expect to see more articles each year than the previous year. (There may also be separate bursts of scholarship around a justice's retirement and around a justice's death.) This metric is static and treats each year the same--perhaps someone would spend more time analyzing year-by-year impact! Additionally, the increase in the volume of journals, particularly online journals available on Westlaw, may skew results for justices with more recent histories.

I narrowed my search to the "Law Reviews & Journals" database, which is broad enough to include some practitioners' publications and the ABA Journal but should work for a rough examination of justices. I then developed a chart, "Law Journal Article Title Mentions Per Year," with the denominator the years since that justice first joined the Supreme Court. I selected the 13 most recent justices (excluding Justice Gorsuch) to have served on the Court.
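The metric itself is just a ratio. A minimal sketch, with hypothetical Westlaw hit counts invented for illustration:

```python
def mentions_per_year(hits, first_year, through_year=2018):
    # Denominator: years since the justice first joined the Court.
    return hits / (through_year - first_year)

# Hypothetical title-search hit count for a justice who joined in 1986.
print(mentions_per_year(320, 1986))  # 10.0 mentions per year
```

Dividing by tenure puts long-serving and recently appointed justices on a roughly comparable footing, though it ignores the year-by-year effects noted above.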

UPDATE: I transposed the dates for Justices Sotomayor and Kagan in an earlier version, and their data changed slightly.

I then added in Justice Gorsuch and looked simply at the raw citation totals, regardless of years of service.

I'll leave it to others to discern what these figures might mean, if anything. I'll note that Justice Scalia dwarfs all others, which was not surprising, but a few other results did mildly surprise me. A few pieces about Justice Gorsuch were apparently written in his days as a prospective, or actual, nominee, then converted into pieces with a title about what "Justice Gorsuch," rather than "Judge Gorsuch," has written or said in the past--one reason his number stands at 5 after one year's service, to Justice Kagan's 4.

UPDATE: Dave Hoffman rightly points out that there are likely tens of thousands of law journal articles published every single year. Due to the search function in Westlaw, I could not even run a search limited to articles mentioning "law" in a given year, because results cannot exceed 10,000 hits, and I can't narrow the search database to Law Reviews & Journals in WestlawNext. Articles about a Supreme Court justice, then, are a tiny slice of all scholarship. UPDATE: And, of course, this is only a very crude metric that is assuredly overinclusive and underinclusive. But the relative relationships between justices should be modestly illuminating.

New essay draft: "Legal Quandaries in the Alabama Senate Election of 2017"

I have posted a new essay forthcoming in the Alabama Law Review, entitled Legal Quandaries in the Alabama Senate Election of 2017. Here is the abstract:

President Donald Trump’s decision to nominate Alabama Senator Jeff Sessions as his Attorney General resulted in a vacancy in the Senate and triggered a special election. The special election, however, revealed the many complexities of the Seventeenth Amendment, special elections generally, and Alabama state law specifically.

This Article traces a series of legal quandaries that arose from the special election, some of which remain open questions for future Alabama elections, and for United States Senate elections more generally. Part I examines the scope of the Alabama Governor’s power to call for a special election under the Seventeenth Amendment and state law. Part II scrutinizes the complications for replacing a late-withdrawing candidate and for counting votes cast for a candidate who resigns. Part III identifies proposed gambits, from postponing the election to write-in campaigns, that never came to fruition. Part IV examines the timing surrounding certification of election results in Alabama. Part V looks at gaps in Alabama’s recount and election contest procedures. Finally, Part VI identifies the most significant opportunities to clarify Alabama law and to properly interpret the Seventeenth Amendment to avoid uncertainty in future elections.

I have a very short turnaround before submitting the final draft for editing, but I welcome any comments or feedback!

Draft work in progress: "The High Cost of Lowering the Bar"

My colleague Rob Anderson and I have posted a draft article, The High Cost of Lowering the Bar on SSRN. From the abstract:

In this Essay, we present data suggesting that lowering the bar examination passing score will likely increase the amount of malpractice, misconduct, and discipline among California lawyers. Our analysis shows that bar exam score is significantly related to likelihood of State Bar discipline throughout a lawyer’s career. We investigate these claims by collecting data on disciplinary actions and disbarments among California-licensed attorneys. We find support for the assertion that attorneys with lower bar examination performance are more likely to be disciplined and disbarred than those with higher performance.

Although our measures of bar performance only have modest predictive power of subsequent discipline, we project that lowering the cut score would result in the admission of attorneys with a substantially higher probability of State Bar discipline over the course of their careers. But we admit that our analysis is limited due to the imperfect data available to the public. For a precise calculation, we call on the California State Bar to use its internal records on bar scores and discipline outcomes to determine the likely impact of changes to the passing score.

We were inspired by the lack of evidence surrounding costs that may be associated with lowering the "cut score" required to pass the California bar, and we offered this small study as one data point toward that end. The Wall Street Journal cited the draft this week, and we've received valuable feedback from a number of people. We welcome more feedback! (We also welcome publication offers!)

The paper really does two things--identifies the likelihood of discipline associated with the bar exam score, and calls on the State Bar to engage in more precise data collection and analysis when evaluating the costs and benefits of changing the cut score.

It emphatically does not do several things. For instance, it does not identify causation, and it identifies a number of possible reasons for the disparity (at pp. 12-13 of the draft). Additionally, it simply identifies a cost--lowering the cut score will likely increase the number of attorneys subject to discipline. It does not make any effort to weigh that cost--it may well be the case that the State Bar views the cost as acceptable given the trade-off of benefits (e.g., more attorneys, more access to justice, etc.) (see pp. 11-12 of the draft). Or it might be the case that the occupational licensing of the state bar and the risk of attorney discipline should not hinge on correlation measures like bar exam score.

There are many, for instance, who have been thoughtfully critical of the bar exam and would likely agree that our findings are accurate but reject the notion that these costs should be insurmountable. Consider the thoughtful commentary from Professor Deborah Jones Merritt at the Law School Cafe, who has long offered careful and substantive critiques of the use of the bar exam generally.

It has been our hope that these costs are addressed in a meaningful, substantial, and productive way. We include many caveats in our findings for that reason.

Unfortunately, not everyone has reacted to this draft that way.

The Daily Journal (print only) solicited feedback on the work with a couple of salient quotations. First:

Bar Trustee Joanna Mendoza said she agreed the study should not be relied on for policy decisions.

“I am not persuaded by the study since the professors did not have the data available to prove their hypothesis,” she said.

We feel confident in our modest hypothesis--that attorneys with lower bar exam scores are subject to higher rates of discipline. We use two methods to support it. We do not have individualized data that would allow us to measure the precise effect, but we are confident in this core hypothesis.

Worse, however, is the disappointing response. Our draft expressly calls on the State Bar to study the data! While we can only roughly address the impact at the macro level, we call on the bar to use its data for more precise information! We do hope that the California State Bar will do so. But it appears it will not--at least, not unless it has already planned on doing so:

Bar spokeswoman Laura Ernde did not directly address questions about the Pepperdine professors’ study or their call for the bar to review its internal data, including non-public discipline. Ernde wrote in an email that the agency would use its ongoing studies to make recommendations to the Supreme Court about the bar exam.

Second are the remarks from David L. Faigman, dean of the University of California Hastings College of Law. Dean Faigman has been one of the most vocal advocates for lowering the cut score (consider this Los Angeles Times opinion piece). His response:

Among his many critiques, Faigman said the professors failed to factor in a number of variables that impact whether an attorney is disciplined. 

“If they were to publish it in its current form, it would be about as irresponsible a product of empirical scholarship I could imagine putting out for public consumption,” Faigman said. “God forbid anybody of policy authority should rely on that manuscript.”

It's hard to know how to address a critique when the epithet "irresponsible" is the substance of the critique.

We concede that many variables may contribute to attorney discipline (pp. 12-13), and the paper makes no attempt to disentangle them. Instead, we're pointing out that lower bar scores correlate with higher discipline rates, and lowering the score further would likely result in still higher discipline rates. Yes, many factors go into discipline--but the consequence of lowering the cut score remains: higher rates of discipline.

And our call for policy authorities to "rely" on the manuscript is twofold--to consider that there are actual costs to lowering the cut score, and to use more data to more carefully evaluate those costs. Both, I think, are valuable things for a policy authority to "rely" upon.

We hope that the paper sparks a more nuanced and thoughtful discussion than the one that has been waged in lobbying the State Bar and state legislature so far. We hardly know what the "right" cut score is, or the full range of costs and benefits that arise at varying changes to the cut score of the bar exam. But we hope decisionmakers patiently and seriously engage with these costs and benefits in the months--and, perhaps ideally, years--ahead.