Fictional Attorney of the Month: Peter Banning

Robin Williams portrays Peter Banning, a successful corporate attorney visiting family in London when mysterious events disrupt his otherwise-orderly routine. So opens Hook, a 1991 retelling of the story of Peter Pan, where Peter has left the Lost Boys in Neverland, grown up, and become about as far from childlike ways as can be imagined.

Mr. Banning has forgotten this past and is rudely reintroduced to it as his children are kidnapped and taken away, interrupting his "vacation"--or, really, his attempts to work remotely in the early 1990s with attorneys across the Atlantic. He is about as humorless as one might expect an overworked corporate lawyer to be. He introduces himself to Captain Hook as "Peter Banning, Attorney at Law." (That's a real sign of confidence.) He caps a series of insults traded with Rufio, interim leader of the Lost Boys, with the closing remark, "Don't mess with me, man, I'm a lawyer!"

But it's perhaps the restyling of Shakespeare's cry about killing lawyers that is the most memorable moment of this lawyer being dropped into Neverland and experiencing a world that is not as impressed with his credentials and qualifications as he is. And he's this month's Fictional Attorney of the Month.

No, the MBE was not "harder" than usual

I frequently read comments, on this site and others, claiming that the bar exam was simply harder than usual. Specifically, I read many people, often law faculty (who didn't take the exam this year) or recent graduates (the vast majority of whom are taking the bar exam for the first time), insisting that the bar, especially the Multistate Bar Exam ("MBE"), is "harder" than before.

Let's set aside, for now, and briefly, (1) rampant speculation, (2) cognitive biases suggesting that the instance in which someone is taking a multiple choice test that counts for something feels "harder" than ungraded practice, (3) erroneous comparisons between the MBE and bar prep companies, (4) retroactive fitting of negative bar results with negative bar experiences, or (5) the use of comparatives in the absence of a comparison.

Let's instead focus on whether the July 2015 bar exam was "harder" than usual. The answer is, in all likelihood, no--at least, almost assuredly, not in the way most are suggesting, i.e., that the MBE was harder in such a way that it resulted in lower bar passage rates.

I'll explain why this is the right question, and why I include the caveat "almost assuredly," below. First, it might be beneficial to take a moment to explain how the MBE is scored, and how that should factor into an analysis.

I. What the MBE scale is

Many are familiar with a "curved" exam, either from college or law school. The MBE is not curved. (For that matter, neither is the LSAT.)

In college, letter grades are commonly assigned based on converting numeric scores to a letter (e.g., 90-100 is an A, 80-89 is a B, etc., with some gradations for +'s and -'s). A common way of curving the exam is to add points to the top grade in the class to make it 100, and add the same number of points to everyone else's score. If the highest grade on the exam is a 92, then everyone gets an additional 8 points. If the highest grade is a 98, everyone gets an additional 2 points (and most classmates complain that this student "wrecked the curve"). This isn't really a "curve" in the typical use of the term, but it's a common way of distributing grades.

Instead, most law schools "curve" grades based on a pre-determined distribution of grades. Consider the University of California-Irvine. In a class of, say, 80 students, instructors are required to give 3 or 4 A+'s; 19%-23% of the next highest grades are A's; 19%-23% of the next highest grades are A-'s; and so on.

But the MBE uses neither of these. The MBE uses a process known as "equating," then "scales" the test. These are technical statistical measures, but here's what the process is designed to do. (Let me introduce an important caveat here: the explanations that follow are grossly oversimplified, but they convey the most basic ideas of the measurement!)

Imagine we have two groups of students. They are taking a test, but on different days. And we don't want to give them the exact same test, because, well, that's a bad idea--the second group might get answers from the first group. But we want to be able to compare the two groups of students to each other.

It wouldn't really do to use our law school "curve" above. After all, what if the second group is much smarter than the first group? If we, say, imposed a fixed 75% pass rate, why should the second group be penalized for taking the test among a much smarter group, when their chances would have been better the first time around?

Standardized testing needs a way of accounting for this. So it does something called equating. It uses versions of questions from previous administrations of the exam, known as "anchor" questions or "equators." It then uses these anchor questions to compare the two different groups. One can tell whether the second group performed better, worse, or similarly on the anchor questions, which allows one to compare groups over time. It then examines how the second group did on the new questions. It can then better evaluate performance on those new questions by scaling the score based on performance on the anchor questions.

This is why the bar jealously guards its exam questions and why there is such tight security around the exam. It needs some of the questions to compare groups from year to year. But as the law changes, or simply to keep the test relatively fresh, there are always new questions introduced into the exam.

II. How the MBE scale works

It's one thing to read about the math--yes, you might think, standardized test administrators have some statistical magic--but it's still a challenge to understand. How does it work in practice?

Consider two groups of similarly-situated test-takers, Group A and Group B. They each achieve the same score, 15 correct, on a batch of "equators." But Group A scores 21 correct on the unique questions, while Group B scores just 17 right.

We can feel fairly confident that Groups A and B are of similar ability. That's because they achieved the same score on the anchor questions, the equators that help us compare groups across test administrations.

And we can also feel fairly confident that Group B had a harder test than Group A. (Subject to a caveat discussed later in this part.) That's because we would expect Group B's scores to look like Group A's scores, given that the two groups are of similar capability. Because Group B performed worse on the unique questions, it looks like it received a harder batch of questions.

The solution? We scale the answers so that Group B's 17 correct answers look like Group A's 21 correct answers. That accounts for the harder questions. Bar pass rates between Group A and Group B should look the same.

In short, then, it's irrelevant if Group B's test is harder. We'll adjust the results because we have a mechanism designed to account for variances in the difficulty of the test. Group B's pass rate will match Group A's pass rate because the equators establish that they are of similar ability.
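To make the mechanics concrete, here is a minimal, deliberately crude sketch of that adjustment in Python, using the made-up numbers for Groups A and B above. The NCBE's actual procedure rests on Item Response Theory rather than this toy additive adjustment, so treat the function below as an illustration of the intuition, not the real scoring method.

```python
# A deliberately crude sketch of the intuition behind equating and scaling,
# using the hypothetical Groups A and B above. The NCBE's actual procedure
# relies on Item Response Theory; this toy additive adjustment is for
# illustration only, and every number here is made up.

def crude_equate(anchor_old, anchor_new, unique_old, unique_new):
    """Return an additive adjustment for the new group's unique-question score.

    anchor_*: correct answers on the shared "equator" questions
    unique_*: correct answers on the non-shared (unique) questions
    """
    # Step 1: compare ability using the anchor questions common to both forms.
    ability_gap = anchor_new - anchor_old        # 0 here: the groups look alike
    # Step 2: any remaining gap on the unique questions is treated as a
    # difference in form difficulty, not ability, and is adjusted away.
    difficulty_gap = (unique_new - unique_old) - ability_gap
    return -difficulty_gap                       # points added to the new group

# Group A: 15 correct on the equators, 21 correct on the uniques.
# Group B: 15 correct on the equators, 17 correct on the uniques.
adjustment = crude_equate(anchor_old=15, anchor_new=15, unique_old=21, unique_new=17)
print(adjustment)        # 4: Group B's harder form earns a 4-point boost
print(17 + adjustment)   # 21: Group B's score now looks like Group A's
```

Because the anchor scores match, the entire four-point gap on the unique questions is attributed to a harder form and scaled away--which is the point of the example.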

When someone criticizes the MBE as being "harder," that statement has relevance only if the person means the exam was "harder" in a way that caused lower scores; and, as this example demonstrates, that is not how typical equating and scaling work.

Let's instead look at a new group, Group C.

On the unique questions, Group C did worse than Group A (16 right as opposed to 21 right), much like Group B (17 to 21). But on the equators, the measure for comparing performance across tests, Group C also performed worse, 13 right instead of Group A's 15.

We can feel fairly confident, then, that Group C is of lesser ability than Group A. Their performance on the equators shows as much.

That also suggests that when Group C performed worse on unique questions than Group A, it was not because the questions were harder; it was because they were of lesser ability.

There are, of course, many more nuanced ways of measuring how different the groups are, examining the performance of individuals on each question, and so on. (For instance, what if Group C also got harder questions by an objective measure--as in, Group A would have scored the same score as Group C on the uniques if Group A answered Group C's uniques? How can we examine the unique questions independent of the equators, in the event that the uniques are actually harder or easier?) But this is a very crude way of identifying what the bar exam does. (For all of the sophisticated details, including how to weigh these things more specifically, read up on Item Response Theory.)
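The parenthetical above can be made concrete with the same kind of toy model. The sketch below is, again, only an illustration and not the NCBE's actual Item Response Theory machinery; the proportional expectation it uses is an assumption of the toy model, invented here to show how a gap on the unique questions can be split into an "ability" piece and a "difficulty" piece.

```python
# A toy attribution of score gaps, extending the crude sketch to Group C.
# Illustration only; the real analysis uses Item Response Theory.

def attribute_gap(anchor_ref, anchor_new, unique_ref, unique_new):
    """Split the new group's gap on unique questions into ability vs. difficulty."""
    # The equators are the yardstick: a gap there reads as a difference in ability.
    ability_gap = anchor_new - anchor_ref
    # Toy assumption: if the new group's anchor score is x% of the reference
    # group's, expect its unique-question score to be x% as well.
    expected_unique = unique_ref * (anchor_new / anchor_ref)
    # Only the residual beyond that expectation reads as a harder (or easier) form.
    difficulty_gap = unique_new - expected_unique
    return ability_gap, difficulty_gap

# Group B vs. Group A: identical anchors, so the whole 4-point gap on the
# uniques is attributed to a harder form and would be scaled away.
print(attribute_gap(anchor_ref=15, anchor_new=15, unique_ref=21, unique_new=17))  # (0, -4.0)

# Group C vs. Group A: worse anchors (13 vs. 15), so most of the gap on the
# uniques is attributed to ability; only the small residual (16 observed vs.
# roughly 18.2 expected) would read as a somewhat harder form.
print(attribute_gap(anchor_ref=15, anchor_new=13, unique_ref=21, unique_new=16))  # (-2, about -2.2)
```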

So, when the MBE scores decline, they decline because the group, as a whole, has performed worse than the previous group. And we can measure that by comparing their performance on similarly-situated questions.

III. Did something change test-taker performance on the MBE?

The only way the failure rate could have increased because of the test is if there were a problem with the test itself. It might be, of course, that the NCBE created an error in its exam by using the wrong questions or scoring it incorrectly, but there hasn't been such an allegation and we have no evidence of that. (Of course, we have little evidence of anything at all, which may be an independent problem.)

But this year, some note, is the first year that the MBE includes seven subjects instead of six. Civil Procedure was added as a seventh subject in the February 2015 exam.

Recall how equating works, however. We compare different groups of test-takers using anchor questions drawn from earlier administrations. That means the Civil Procedure questions were not used to equate the scores. If the Civil Procedure questions were more challenging, or if students performed worse on those questions than on others, we can always go back and see how they did on the anchor questions. Consider Groups A & B above: if they are similarly skilled test-takers, but Group B suffered worse scores on the uniques because of some defect in the Civil Procedure questions, then scaling will cure those differences, and Group B's scores will be scaled to match Group A's.

Instead, a "change" in the bar pass rates derived from the exam itself must affect how one performs on both the equators and the uniques.

The more nuanced claim is this: Civil Procedure is a seventh subject on the bar exam. Law students must now learn an additional subject for the MBE. Students have limited time and limited mental capacity to learn and retain this information. The seventh subject leads them to perform worse than they would have on the other six subjects. That, in turn, causes a decline in their performance on the equators relative to previous cohorts. And that causes an artificial decline in the score.

Maybe. But there are several factors suggesting (note, not necessarily definitively!) this is not the case.

First, this is not the first time the MBE has added a subject. In the mid-1970s, the 200-question MBE consisted of just five subjects: Contracts, Criminal Law, Evidence, Property, and Torts. (As a brief note, some of these subjects may have become easier; indeed, the adoption of the Federal Rules of Evidence simplified the questions at a time when the bulk of the evidentiary questions were based on common law. But, of course, there is more law today in many of these areas, and perhaps more complexity in some of them as a result.)

By the mid-1970s, the NCBE considered adding some combination of Civil Procedure, Constitutional Law, and Corporations to the MBE set. It ultimately settled on Constitutional Law, but not without significant opposition. Indeed, some even went so far as to suggest that it was not possible to draft objective-style multiple choice questions on Constitutional Law. (I've read through the archives of the Bar Examiner magazine from those days.) Nevertheless, the MBE forged ahead and added a sixth subject in Constitutional Law. There was no great outcry about changes in bar pass rates or the inability of students to handle a sixth subject; there was no dramatic decline in scores. Instead, Constitutional Law was, and now is, deemed a perfectly ordinary part of the MBE, with no complaints that the addition of this sixth subject proved overwhelming. Adding a subject to the MBE, in short, is not unprecedented.

Furthermore, it's an overstatement to emphasize that the MBE now includes a seventh subject when all bar exams (to my knowledge) already tested Civil Procedure. Yes, the testing occurred in the essay components rather than the multiple choice components, but students were already studying for it, at least somewhat, anyway. And in many jurisdictions (e.g., California), it was federal Civil Procedure that was tested, not state-specific procedure.

Finally, Civil Procedure is a substantial required course at (I believe) every law school in America--the same cannot be said, at the very least, of Evidence and of Constitutional Law. To the extent it's something students need to learn for the bar, they have, generally, already learned it in law school. (Retention and comprehension, of course, are other matters.)

These arguments are not definitive. It may well be the case that they are wrong and that Civil Procedure is a disruptive subject sui generis. But it points to a larger issue: such arguments are largely speculation, and they require more evidence before we can be confident that Civil Procedure is (or is not) responsible, in any meaningful measure, for the lower MBE scores.

We do, however, have two external factors that predict the decline in MBE scores, and that suggest a decline in student quality, rather than a more challenging set of subjects, is responsible. First, law schools have been increasingly admitting--and, subsequently, increasingly graduating--students with lower credentials, including lower undergraduate grade point averages and lower LSAT scores. Jerry Organ has written extensively about this. We should expect declining bar pass rates as law schools continue to admit, and graduate, these students. (The degree of decline remains a subject of some debate, but a decline is to be expected.)

Second, the NCBE has observed a decline in MPRE scores. Early and more detailed responses from the NCBE revealed a relatively high correlation between MPRE and MBE scores. And because the MPRE is not subject to the same worries about changes in the subject matter tested, it is a useful predictor of whether one would expect a particular outcome on the MBE.

IV. Some states are making the bar harder to pass--by raising the score needed to pass

Illinois, for instance, has announced that it will increase the score needed to pass the exam. When it adopted the Uniform Bar Exam, Montana decided to increase the score needed to pass.

These factors are unrelated to the changes in MBE scores. In those jurisdictions, we might expect pass rates to decline, and we might attribute that decline to something other than the MBE scores.

And it actually raises a number of questions for those jurisdictions. Why is the pass score being increased? Why did the generation of lawyers who passed that bar and who were deemed competent, presumably, to practice law conclude that they needed to make it a greater challenge for Millennials? Is there evidence that a particular bar score is more or less effective at, say, excluding incompetent attorneys, or minimizing malpractice?

These are a few of the questions one might ask about why one may want a bar exam at all--its function, its role as gatekeeper, and so on. But they are questions about the difficulty of passing the bar, which is a distinct inquiry from the difficulty of the MBE questions themselves.

V. Concluding thoughts

Despite some hesitation or tentative conclusions offered, I'll restate something I began with: "Let's instead focus on whether the July 2015 bar exam was 'harder' than usual. The answer is, in all likelihood, no--at least, almost assuredly, not in the way most are suggesting, i.e., that the MBE was harder in such a way that it resulted in lower bar passage rates."

We can see that the MBE uses Item Response Theory to account for variances in test difficulty, and the NCBE scales scores to ensure that harder or easier questions do not affect the outcome of the test. We can also see that merely adding a new subject, by itself, would not decrease scores. Instead, something would have to affect test-takers' ability to an extent that it would make them perform worse on similar questions. And we have some good reasons to think (but, admittedly, not definitively, at least not yet) that Civil Procedure was not that cause, and some good reasons (from declining law school admissions standards, as measured by LSAT scores and UGPAs, and from declining MPRE scores) to think that the decline is more related to test-takers' ability. More evidence and study are surely needed to sharpen the issues, but this post should clear up several points about MBE practice (in, admittedly, deeply, perhaps overly, simple terms).

Law schools ignore this to their peril. Blaming the exam without an understanding of how it actually operates masks the major structural issues confronting schools in their admissions and graduation policies. And it is almost assuredly going to get worse over each of the next three July administrations of the bar exam.

Bar exam scores hit 27-year low

After my early reporting on the dropping bar pass rates across most jurisdictions, Bloomberg reports that the scaled MBE score has dropped yet again, by 1.6 points, to reach its lowest level since 1988.

Last year, I blogged about the drop in scores on the MBE, the 200-question multiple choice test that serves as the primary objective, scaled component of the bar exam, noting that it was the single-largest drop in MBE history. This year's decline is by no means historic, but it brings the score to among the lowest in the history of the test. I've updated charts from last year here.

Year-over-year LSAT test-takers up a little, or down a little, or up somewhat

The Law School Admission Council ("LSAC") is in the business of, among other things, administering the Law School Admission Test ("LSAT"). Shortly after each administration, which it offers four times a year, it releases the total number of LSAT test-takers for that administration. This is a pretty easy number to understand. But it conceals a lot of data.

Baseball stat geeks are acutely aware of this phenomenon in other contexts. "Batting average," for instance, is a popular way of measuring a batter's productivity, which measures the number of hits a batter has over his at-bats. But the measure conceals a lot of information about the batter's productivity. It equates singles and home runs, the latter being far more valuable. It ignores walks, a positive outcome for a batter. There are, perhaps, more valuable or useful metrics to evaluate a batter's quality, if only one can look inside the data.
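As a toy illustration (with invented numbers), here is the kind of distinction a batting average hides: two batters with identical averages can differ sharply once walks and power are counted, which is exactly the sense in which a single top-line number conceals data.

```python
# Invented numbers, purely to illustrate how a single summary statistic can
# conceal information: identical batting averages, very different value.

def batting_average(hits, at_bats):
    return hits / at_bats

def on_base_percentage(hits, walks, at_bats):
    # Simplified OBP: (hits + walks) / (at-bats + walks). The official formula
    # also counts hit-by-pitch and sacrifice flies, omitted here.
    return (hits + walks) / (at_bats + walks)

# Batter 1: 150 singles and no walks in 500 at-bats.
# Batter 2: 150 hits (including 40 home runs) plus 80 walks in 500 at-bats.
print(batting_average(150, 500), batting_average(150, 500))                # 0.3 0.3
print(on_base_percentage(150, 0, 500), on_base_percentage(150, 80, 500))   # 0.3 ~0.397
```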

Reading that top line, you see that overall LSAT takers are up 6.6% over last year. But there are other numbers to look at, too, which LSAC distributes via PDF to law schools but does not include in its top-line data on its website.

For instance, LSAT administrations at U.S. regional test centers are up 7.9% June-over-June, but down 7.7% at Canadian test centers. (A handful of other international test centers exist, too.) That's probably slightly better news for law schools--the overwhelming majority of matriculants to United States law schools take the LSAT in the United States.

Even within that 7.9% number, salient distinctions exist. For instance, first-time test-takers at U.S. regional test centers are up 8.0% June-over-June, whereas repeaters are up 7.7%. Perhaps not an overwhelming difference, but one that emphasizes that first-time test-takers, probably the more important measure for evaluating new interest in law school, are growing slightly faster than repeaters.

How these statistics correlate to the more meaningful measure, law school applicants, is another matter entirely. Last year, overall LSAT test-takers declined 9.1% June-over-June, including a 9.9% decline at U.S. regional centers and an 11.7% decline among first-time takers at U.S. regional centers. By year's end, LSAT test-takers had increased year-over-year, and applicants declined just 1.8% year-over-year.

More granular data might indicate a more meaningful narrative about applicants this admissions cycle for the incoming Class of 2019. But, as is often the case, it's wait and see until the next data point arrives.

A tale of two law school applicant cycles

It may not be the best of times, but there is no question that today's prospective law school applicant is in a dramatically different position than the law school applicant of just four years ago. Gleaning data from LawSchoolNumbers (of course, with all the usual caveats that come with such data), I looked at the profiles of similarly-situated law school applicants applying to a similar set of law schools in the 2010-2011 and 2014-2015 application cycles. I included their self-reported (all the usual caveats) outcomes. (I found applicants with identical LSAT scores and similar UGPAs, but I ensured that if the UGPAs were different, the 2014-2015 applicants always had the slightly worse UGPA.) I anonymized the schools, even though they're easily discoverable, simply because the precise identities of each school don't matter terribly much; instead, the illustration of the dramatically different outcomes for similarly-situated applicants four years apart stands alone. The dollar figure listed is the three-year scholarship offer.

YEAR     | 2010-2011         | 2014-2015
School W | Rejected          | Waitlisted
School X | Rejected          | Accepted, $30,000
School Y | Rejected          | Accepted, $102,000
School Z | Accepted          | Accepted, $102,000

YEAR     | 2010-2011         | 2014-2015
School J | Rejected          | Accepted, $120,000
School K | Waitlisted        | Accepted, $159,000

YEAR     | 2010-2011         | 2014-2015
School C | Waitlisted        | Accepted
School D | Rejected          | Accepted, $127,500
School E | Accepted          | Accepted, $105,000

YEAR     | 2010-2011         | 2014-2015
School P | Waitlisted        | Accepted
School Q | Waitlisted        | Accepted, $48,000
School R | Accepted, $25,000 | Accepted, $132,000

UPDATE: For the methodology, yes, I simply matched pairs of similarly-situated applicants as best I could. I excluded anyone with a self-identified distinctive applicant profile, such as under-represented minority or early action applicants, to minimize any distinctions between applicants.

July 2015 bar exam results again show declining pass rates almost everywhere: outliers, or a sign of more carnage?

This post has been updated.

Many speculated that the July 2014 bar passage results were anomalously low on account of some failure in the exam, either because of software glitches or because of some yet-undescribed problem with the National Conference of Bar Examiners and its scoring of the Multistate Bar Exam. Last October, I was among the first to identify the decline in scores, and my initial instinct caused me to consider that a problem may have occurred in the bar exam itself. Contrary evidence, however, led me to change my mind, and the final scores showed rather significant declines in all jurisdictions--declines based, I concluded, in all likelihood on a decline in law school graduate quality.*

It's quite early for the July 2015 bar exam results, but they are trickling in. In most of these jurisdictions, only the overall pass rate is available, even though it's usually better to separate first-time test-takers from repeaters (and, even better, to isolate first-time test-takers who graduated from ABA-accredited law schools). In other jurisdictions, I use the best available data, which is sometimes second-hand (and I link all sources when available). Worse, many of these jurisdictions only list pass and fail identities, so I have to do the math myself, which increases the likelihood of error.
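For what it's worth, the arithmetic itself is trivial; the work is in extracting the counts from the published lists. A minimal sketch, with hypothetical numbers:

```python
# Hypothetical numbers: a jurisdiction publishes a list of passers and a total
# number of examinees, so the overall pass rate has to be computed by hand.

passers = 412         # e.g., names appearing on the published pass list
total_takers = 598    # e.g., total examinees reported by the jurisdiction
print(f"{passers / total_takers:.0%}")   # 69%
```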

But looking at the NCBE statistics from last year, we can see another overall decline in scores almost across the board. And even in places where there was an uptick in pass rates--which, perhaps, suggests that things are not as dire as they appeared last year--the rates remain low compared to recent history. Assuming last year's exam was not an anomaly but the beginning of a trend, which I eventually came to agree was the best explanation given the evidence, these results are consistent with that assumption--with no ExamSoft fiasco to blame. The problem of lower standards at many law schools, which began about four years ago, appears to be coinciding with the decline in bar pass rates--in many jurisdictions to recent lows, with several jurisdictions experiencing double-digit drops in the pass rate.

As with last year, of course, we're looking at only a handful of early-reporting jurisdictions. The final scaled MBE score, when disclosed, should reveal a great deal of information, so projections from the trends of a few states should be treated with appropriate caution (and regarded as speculative).

UPDATE: The MBE scores have been released, and they are the lowest since 1988. You can see details here.

Change in overall bar pass rate, July 2014 to July 2015

Iowa, +5 points (July 2014: 81%; July 2015: 86%)

Kansas, -3 points (July 2014: 79%; July 2015: 76%)

New Mexico, -12 points (July 2014: 84%; July 2015: 72%)

North Carolina, -4 points** (July 2014: 71%; July 2015: 67%)

North Dakota, +6 points (July 2014: 63%; July 2015: 69%)

Oklahoma, -11 points (July 2014: 79%; July 2015: 68%)

Washington, -1 point (July 2014: 77%; July 2015: 76%)

West Virginia, -5 points (July 2014: 74%; July 2015: 69%)

Wisconsin, -10 points*** (July 2014: 75%; July 2015: 65%)

**denotes first-time test-takers, not overall rate. UPDATE: I relied on erroneous data from 2014; I've since updated the data.

***source via comments

It's worth noting that North Carolina's bar appears to have an unusually volatile pass rate. The first-time pass rate in July 2013 was 71%; that skyrocketed to 85% last year; and that plummeted back to 67% this year. UPDATE: This data was in error, see above.

Jurisdictions like North Dakota are incredibly small--just 62 people took the bar, which likely explains some of the great volatility in scores, as each test-taker represents almost 2 points in the overall pass rate. July 2013 had a 76% overall pass rate, which plunged to 63% last year and bobbed back up to 69% this year. But more importantly, their first-time pass rate increased 15 points, from 64% to 79%, which resembles the 81% first-time pass rate from July 2013.

I've also added a little historical perspective for these bar exams. I've added charts beside the table showing the overall July pass rate (in North Carolina's case, the first-time pass rate) since 2010. In many jurisdictions, this is a six-year low, and it might be the lowest in quite some time; in most jurisdictions, it's the lowest or second-lowest in the six-year window of data. (The charts are slightly deceptive because the axes all end near the bottom of the pass rate range and don't go all the way down to 0%; perhaps not obvious to all, most graduates still pass the bar in these jurisdictions, but the charts reflect the relative changes within a small band in recent years.)

*(As an important caveat, I recognize that there are many measures of "student quality" or "law school graduate quality," and that the bar exam is but one measure of that. But, assuming, which may be even too big an assumption for many, that the bar exam presents, very roughly, a proxy for those who have the minimum capability to practice law, and the pass rates continue to decline, then we can, very roughly, say that there has been a "decline" in "law school graduate quality," at least as evaluated by this one metric. Perhaps there are other metrics, or perhaps there are better metrics, but this is how I use the term here.)

Additional updates to this post will occasionally occur here.

Alabama, -5 points (July 2014: 65%; July 2015: 60%)

Connecticut, -2 points (July 2014: 77%; July 2015: 75%)

Florida, -3 points (July 2014: 72%; July 2015: 69%)

Idaho, +4 points (July 2014: 65%; July 2015: 69%)

Indiana, unchanged (July 2014: 72%; July 2015: 72%)

Louisiana, -8 points (July 2014: 70%; July 2015: 62%)

Mississippi, -27 points (July 2014: 78%; July 2015: 51%)

Missouri, -1 point (July 2014: 85%; July 2015: 84%)

Montana, -2 points (July 2014: 64%; July 2015: 62%)

Nevada, +2 points (July 2014: 58%; July 2015: 60%)

Oregon, -5 points (July 2014: 65%; July 2015: 60%)

Vermont, -14 points (July 2014: 66%; July 2015: 52%)

Can a state legislature delegate its power under the Elections Clause to another?

This is the second of two posts about my forthcoming article, Legislative Delegations and the Elections Clause, Florida State University Law Review (forthcoming), available on SSRN. Comments, critiques, and feedback are welcome.

As I noted yesterday, a number of practices arose out of early election efforts that caused Congress to consider whether or not the election practice was dictated by the "legislature" of the state, such that Congress should recognize the election as valid. I'll briefly summarize a couple of these congressional debates, which are detailed extensively in the paper.

First, there are numerous instances in which direct democracy dictated election procedures in the first hundred years of the Republic. Constitutional conventions, or votes of the people ratifying statehood, would include provisions for elections. And elections that took place pursuant to those provisions would usually be approved by Congress, despite the fact that they had not occurred pursuant to a law promulgated by the "legislature."

But in several other instances, Congress rejected election results because they occurred by some means not authorized by the legislature. When the legislature acted to supersede a constitutional provision, Congress would recognize the validity of the election conducted pursuant to the legislature's rule. That is, while Congress would permit the people of a State to promulgate a rule, that rule was not insulated from subsequent legislative action; later actions of the legislature could promulgate a new rule. (Michael Morley has written much about this "intratextual independent legislature" doctrine.)

In all these cases, Congress is confronted squarely with this issue: did the election occur pursuant to a law from the "legislature" of the State, or not? If so, the election was valid; if not, it was invalid. And we have a rich body of these cases, which the Article chronicles, describing Congress's interpretation of the Elections Clause, and specifically the word "legislature."

Second, and perhaps more tellingly in a case like this, Congress has rejected the notion that the legislature could delegate its power (or have it delegated) to some other agency. In an 1865 dispute, the Senate rejected the election of a Senator chosen pursuant to rules promulgated by the New Jersey legislature sitting in a joint meeting, probably, among other reasons (more on this below), because the joint meeting was not the "legislature" as the Constitution defined it. And in the case of Sessinghaus v. Frost, the House rejected an election in Missouri because the city of St. Louis had instituted a voter registration law--a law that had not been enacted by the state legislature. Indeed, a concurring report expressed concern that no state legislature "could delegate its authority to any other power, to any other body" under the Elections Clause.

Given that the independent redistricting commission in Arizona was a rather express delegation of power from the legislature to a commission, Sessinghaus suggests that the delegation would be unconstitutional. But why no mention of this, or of any of the many other cases that Congress has decided in these areas? Congress, after all, sits as judge in election disputes. Perhaps it is because Congress has happily ceded this decisionmaking authority to the courts; perhaps it is because the final votes of members of Congress are not required to include a list of reasons, which makes the precedent of cases like the 1865 dispute dubious; perhaps it is because the Court distrusts the decisions of Congress, as it rather overtly suggested in the majority opinion. Regardless, the Court's decision in Arizona State Legislature v. Arizona Independent Redistricting Commission runs largely against the practice of Congress in the decades when it held the primary authority to review elections--and, perhaps, suggests that there are more sources of authority for interpreting these cases than the Court may traditionally rely upon.

I found the research interesting, and the conclusions from it are still developing. But they should provide some illumination in future discussions about the validity and scope of legislative delegations under the Elections Clause--and, perhaps, about the role of congressional precedent in election disputes in such cases.

Elbridge Gerry and Ruth Bader Ginsburg on our federal system

This is the first of two posts about my forthcoming article, Legislative Delegations and the Elections Clause, Florida State University Law Review (forthcoming), available on SSRN. Comments, critiques, and feedback are welcome.

What were the Founding Fathers thinking about? They were thinking about who had a legislative function. There was no such thing in those days as the initiative or referendum, those developed later, but those are lawmaking functions, so I think it was entirely reasonable to read the Constitution to accommodate whatever means of lawmaking the state had adopted, rather than say, "No, the only way you could make law that counts for this purpose is by the legislature thereof." We can’t know for sure because we have no way of convening with the Founding Fathers, but I think if they knew of the existence of the people’s vote through the initiative or referenda, they would have said, "That’s lawmaking." What we had in mind is who makes the law for the state.

Ruth Bader Ginsburg, conversation at Duke University School of Law, 2015

The evils we experience flow from the excess of democracy. The people do not want virtue, but are the dupes of pretended patriots.

Elbridge Gerry, comments at the federal constitutional convention, 1787

It is hard to overstate how inaccurate Justice Ginsburg's comments are. Men like Mr. Gerry aggressively fought any element of the Constitution that might veer too close to direct democracy. It's the reason so many elements of the Founders' Constitution--from the state legislatures' power to elect Senators to the entire Electoral College (direct election, Mr. Gerry remarked, would have been "radically vicious")--removed elections from the direct control of the people.

Constitutional amendments and state practice have made federal elections more directly democratic, from the direct election of Senators to the common practice of popular election of presidential electors pledged to support a particular candidate. But the Constitution continues to allocate responsibility to actors other than "the people."

Justice Ginsburg, of course, was discussing Arizona State Legislature v. Arizona Independent Redistricting Commission, and the Court's opinion (PDF) handed down this summer. The Arizona state legislature challenged the existence of an independent redistricting commission, which had been created by ballot initiative and empowered to draw congressional districts, a task formerly reserved to the state legislature.

The Elections Clause (or the "Times, Places and Manner" Clause) provides, "The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof . . . ." The Arizona state legislature, understandably, thought that "legislature" meant "legislature," not "lawmaking apparatus."

Justice Ginsburg's off-the-cuff remarks were not the heart of the Court's opinion, but some of the comments in the opinion draw quite close:

While attention focused on potential abuses by state-level politicians, and the consequent need for congressional oversight, the legislative processes by which the States could exercise their initiating role in regulating congressional elections occasioned no debate. That is hardly surprising. Recall that when the Constitution was composed in Philadelphia and later ratified, the people’s legislative prerogatives—the initiative and the referendum—were not yet in our democracy’s arsenal.

Apart from resting on a rather dubious argument from silence, this reasoning overlooks that direct democracy did exist at the founding, even if the initiative and the referendum did not. Mr. Gerry's remarks certainly highlight that. And constitutions proposed or ratified by the people at conventions were an important and powerful lawmaking device known at the time of the founding.

And within a generation of the founding, Justice Joseph Story challenged the notion that the people could alter federal election regulations. During Massachusetts's constitutional convention of 1820, Justice Story spoke out, as a citizen, against the proposition that the people could amend the election provisions of the state constitution. That task, he emphasized, was reserved to the state legislature.

Justice Story's view did not prevail, for the people of Massachusetts did include provisions about elections in their constitution. But Congress has repeatedly been confronted with the question whether an election was valid when it occurred pursuant to a regulation promulgated by some body other than the legislature of the state. It has a fairly extensive set of cases in which it examines the word "legislature," discussions almost (but not wholly) absent from the Court's opinion.

Tomorrow, I'll summarize a few of the highlights from historical discussions, with a brief mention of direct democracy, and a more extensive analysis of delegating the legislature's power to another entity.