Malibu's unusual Measure R: an initiative to create more initiatives

You may not be very familiar with Malibu, the home of Pepperdine, except perhaps for occasional references to Johnny Carson or Barbie dolls. It's a sleepy, rural beach town that runs twenty-seven miles up the Pacific Coast, far from the din of Los Angeles and a good drive from the hub of Santa Monica.

There are a couple of factions in Malibu. One faction wants to preserve the local character of the sleepy, rural beach town--little development, few chain stores, small businesses. Another faction wants the town to look more like a developing community, with more commercial opportunities. (For a story on the recent debate in Malibu, see this Los Angeles Times story.)

Measure R is a ballot initiative that will appear before Malibu voters. (PDF) It contains what might be considered typical land use restrictions: it restricts commercial development to 20,000 square feet; it limits chain stores to 30% of the real estate in new shopping centers; and so on.

But it's the remedy that I find fascinating as a part of Measure R:

A specific plan or plans shall be prepared for every development project subject to this measure. Following adoption of the specific plan or plans for these projects by the City Council, the plan or plans shall be placed on the ballot, as soon as possible, for approval by the voters. One specific plan may be prepared covering more than one development project subject to this measure or a separate specific plan may be prepared for each subject project.

That's right--the ballot initiative empowers voters, by future initiatives, to decide whether to approve or disapprove any such commercial development.

It's a deep commitment to direct democracy, or a deep distrust of the City Council, depending on your views. And if Measure R passes, it's not just this ballot initiative that's enacted, but a likelihood that future ballot measures will give the people of Malibu the power to decide whether to go ahead with other commercial development projects.

National Conference of Bar Examiners: Class of 2014 "was less able" than Class of 2013

Continuing a series about the decline in bar passage rates, the National Conference of Bar Examiners recently wrote a letter to law school deans that explained its theory behind a 10-year low in Multistate Bar Exam scores and the single biggest drop in MBE history. I've excerpted the relevant paragraphs below.

In the wake of the release of MBE scores from the July 2014 test administration, I also want to take this opportunity to let you know that the drop in scores that we saw this past July has been a matter of concern to us, as no doubt it has been to many of you. While we always take quality control of MBE scoring very seriously, we redoubled our efforts to satisfy ourselves that no error occurred in scoring the examination or in equating the test with its predecessors. The results are correct.
Beyond checking and rechecking our equating, we have looked at other indicators to challenge the results. All point to the fact that the group that sat in July 2014 was less able than the group that sat in July 2013. In July 2013 we marked the highest number of MBE test-takers. This year the number of MBE test-takers fell by five percent. This was not unanticipated: figures from the American Bar Association indicate that first-year enrollment fell 7% between Fall 2010 (the 2013 graduating class) and Fall 2011 (the 2014 class). We have been expecting a dip in bar examination numbers as declining law school applications and enrollments worked their way to the law school graduation stage, but the question of performance of the 2014 graduates was of course unknown.
Some have questioned whether adoption of the Uniform Bar Examination has been a factor in slumping pass rates. It has not. In most UBE jurisdictions (there are currently 14), the same test components are being used and the components are being combined as they were before the UBE was adopted. As noted above, it is the MBE, with scores equated across time, that reveals a decline in performance of the cohort that took July 2014 bar examinations.
In closing, I can assure you that had we discovered an error in MBE scoring, we would have acknowledged it and corrected it.

Well, that doesn't make any sense.

First, whether a class is "less able" is a matter of LSAT and UGPA scores. It is not a matter of the size of the class.

Second, to the extent a brief window into the LSAT scores for the entering classes in the Fall of 2010 and 2011 is a useful metric, Jerry Organ has noted elsewhere that the dip in scores was fairly modest in that first year after the peak application cycle. It certainly gets dramatically worse later, but nothing suggests that admissions in that one-year window fell off a cliff. (More data on class quality is, I hope, forthcoming.)

Third, to the extent that the size of the class matters, it does not adequately explain the drop-off. Below is a chart of the mean scaled MBE scores, and an overlay of the entering 1L class size (shifted three years, so that the 1L entering class corresponds with the expected year of graduation).

If there's supposed to be a drop-off in scores because of a drop-off in enrollment (and there is, indeed, a meaningful correlation between the two), it doesn't explain the severity of the drop in this case.

This explanation, then, isn't really an explanation, save an ipse dixit statement that NCBE has "redoubled" its efforts and an assurance that "[t]he results are correct." An explanation is still to be found.

Did ExamSoft cause the bar passage rate decline?

I’ve blogged about the sharp decline in the MBE scores and the corresponding drop in bar passage rates in a number of jurisdictions around the United States. I’m still struggling to find an explanation.

One theory is that the ExamSoft fiasco affected the MBE scores. Most states have two days of exams: a day of essays followed by a day of multiple choice on the MBE. The software most states use for the essay portion had problems in July 2014--test-takers were unable to upload their essay answers in a timely fashion. As a result, students slept less and stressed more the night before the MBE, which may have yielded lower scores on the MBE.

We can test this in one small way: several states do not use ExamSoft. Arizona, Kentucky, Maine, Nebraska, Virginia, and Wisconsin all use Exam4 software; the District of Columbia does not permit the use of computers. If ExamSoft yielded lower scores, then we might expect bar passage rates to remain unaffected in places that didn't use it.

But it doesn’t appear that the non-ExamSoft jurisdictions did any better. Here are the disclosed changes in bar passage rates from July 2013 in jurisdictions that did not use ExamSoft:

Arizona (-7 points)

District of Columbia (-8 points)

Kentucky (unchanged)

Virginia (-7 points)

These states have already disclosed their statewide passage rates, and the changes do not appear to be materially better than those elsewhere around the country.

It might still be a factor in the jurisdictions that use ExamSoft in conjunction with other variables. But it doesn’t appear to be the single, magic explanation for the decline. There are likely other, yet-unexplained variables out there.

 (I’m grateful to Jerry Organ for his comments on this theory.)

Bar exam posts single-largest drop in scores in history

The Multistate Bar Exam, a series of multiple choice questions administered in most jurisdictions, has existed since 1972. The NCBE has statistics disclosing the history of scaled MBE scores since 1976.

After tracking the decline in bar scores across jurisdictions this year, I noted that the MBE had reached a 10-year low in scores. It turns out that's only part of the story.

The 2.8-point drop in scores is the single largest drop in the history of the MBE.

The next-largest drop was in 1984, which saw a 2.3-point drop from the July 1983 test. The biggest increase was in 1994, which saw a 2.4-point increase over the July 1993 test. And the only other fluctuation exceeding two points was the 2.2-point increase in 1989 over the July 1988 test.

What might be behind this change? I've speculated about a few things earlier; I'll address some theories in later posts this week.

Virgin Islands Supreme Court ignores federal court on election dispute

I blogged earlier about the extraordinary dispute in the United States Virgin Islands, in which the Virgin Islands Supreme Court ordered a sitting senator off the ballot because it concluded she had committed a crime involving moral turpitude that disqualified her from office. In response, the governor pardoned her, and an ensuing case in federal court resulted in an order to put her back on the ballot.

I thought that would end the matter.

It didn't.

The case has become even more surreal.

In a recent decision (PDF or decisions page), the Virgin Islands Supreme Court has decided to ignore the federal court order, concluding the federal court lacked jurisdiction to hear the case; and, further, has ordered Senator Alicia "Chucky" Hansen's name off the ballot, even though ballots have been printed, absentee ballots have been sent out, and early voting is underway.

The opinion is meandering, to say the least. It includes citations to the Rooker-Feldman doctrine, the Supremacy Clause's purported distinction between Article III and Article IV courts, exercises of supplemental jurisdiction, and in personam and in rem proceedings.

There's too much to unpack here, but I'll note three brief points.

First, it notes that Senator Hansen has the ability to petition as a write-in candidate. But in U.S. Term Limits v. Thornton, the Supreme Court concluded that keeping a candidate's name off the ballot was effectively a bar on candidacy, and that the availability of a write-in candidacy could not cure the congressional term limits rule that kept a candidate's name off the ballot. Here, too, I think the court misses the mark by arguing that a write-in candidacy is a viable alternative.

Second, it rejects not just Purcell v. Gonzalez, but also the four Supreme Court decisions handed down in the last few weeks involving election litigation in North Carolina, Ohio, Texas, and Wisconsin. In each, the Court restored the "status quo" prior to an upcoming election--in three cases, allowing a contested law to remain in effect, and in one case, continuing an injunction against a law that had been challenged. Here, the court attempts to distinguish these cases on the lack of a record suggesting that there's a problem in altering the ballots--this, despite the fact that early voting is actually underway in the Virgin Islands.

Third, this is the first opportunity for a case to be appealed directly to the United States Supreme Court since a recent jurisdictional law took effect; previously, cases were appealed from the Virgin Islands Supreme Court to the Third Circuit.

We'll see if anything comes from this case. But it might serve as a fifth instance of the Supreme Court stepping in this election season and addressing the preservation of the status quo.

Total LSAT takers in steady decline

Last year I blogged about the fact that for legal education, the worst is yet to come--there continued to be fewer LSAT takers and fewer law school applicants. I charted the decline in cumulative LSATs administered last October. But I noted that there seemed to be an evening out by the end of the cycle and updated the chart to reflect that.

No longer. LSAC has now reported a 9.1% decrease year-over-year in LSAT test-takers in the June 2014 test, and an 8.1% decline in the October 2014 test. That's a cumulative total of 52,745 LSATs administered, down from 57,670 last year, and down from 93,341 in 2009-2010--more than a 40% decline in LSATs administered.

Here's the updated chart showing cumulative LSATs administered.
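As a sanity check, the percentages above can be recomputed from the cumulative figures quoted in the post (52,745 this cycle, 57,670 last cycle, and 93,341 in 2009-2010):

```python
# Cumulative LSATs administered through the October test, per the post
lsats_2009 = 93341   # 2009-2010 cycle
lsats_2013 = 57670   # 2013-2014 cycle
lsats_2014 = 52745   # 2014-2015 cycle

def pct_decline(old: int, new: int) -> float:
    """Percentage decline from old to new."""
    return (old - new) / old * 100

print(f"Year-over-year: {pct_decline(lsats_2013, lsats_2014):.1f}%")
print(f"Since 2009-2010: {pct_decline(lsats_2009, lsats_2014):.1f}%")
```

That works out to roughly an 8.5% year-over-year drop in the cumulative totals and about a 43.5% drop since 2009-2010, consistent with the "more than 40%" figure.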

It appears that for legal education, the worst still may be yet to come.

Bar exam scores dip to their lowest level in 10 years

Earlier, I noted that there had been a drop in bar passage rates in a handful of jurisdictions. (Follow that post to track state-by-state changes in the pass rates as the statistics come in.) A commenter theorized:

It's quite simple actually: the NCBE did a poor job of normalizing the MBE this year. The median MBE score is down a couple of points, and because states scale their essays to match the MBE results in their state, it also means median essay scores have decreased a small amount. Combine the two scores and you are seeing (in states using a 50/50 system), a 4-5 point drop in scores.

It's actually quite damning to the NCBE, because bar passage rates should be up and median MBEs also up if the historical correlation between LSAT and bar passage is taken into account.

Tennessee recently disclosed that the national mean scaled MBE score for July 2014 was 141.47. That's the lowest mean scaled MBE score for a July administration since 2004, when it was 141.2 (PDF). It's also almost three points lower than the July 2013 score.

There are innocuous possible reasons why the score dropped. It might be that there were a disproportionately high number of repeat test-takers. It might be that an increase in test-takers with non-American law degrees yielded a drop. Or there might be other reasons, too.

But for whatever reasons, the decline in MBE scores is almost assuredly the reason that bar passage rates have dropped in a number of jurisdictions. Whether similar declines are going to arise in places like New York and California in the weeks ahead is simply a matter of waiting.

One in a thousand: Judge Reinhardt and Ninth Circuit odds

Josh Blackman recently noted that Judge Stephen Reinhardt has the "uncanny ability to be on the right panels," and asked what the odds are that he could serve on the three recent panels, one regarding California's marriage amendment, another on striking jurors on the basis of sexual orientation, and a third on other Ninth Circuit marriage cases.

The odds are about 1 in 1000.

That's slightly deceptive--it's not unique to these three cases. It's simply because the odds of being on any three random panels are 1 in 1000 in the Ninth Circuit.

But here's how the math works.

There are 29 active judges in the Ninth Circuit. The odds of a given active judge being on an all-active panel are 1/29 + (1/28)(28/29) + (1/27)(27/29) = 3/29, or 10.34%. (I originally miscalculated this as 1/29 + 1/28 + 1/27, or 10.72%.)

But there are also 16 senior judges, and one of them may sit on a panel with two active judges. In those cases, only two slots go to active judges, and the odds are 1/29 + (1/28)(28/29) = 2/29, or 6.90%. (I originally miscalculated this as 1/45 + 1/29 + 1/28, or 9.24%.)

So assuming the composition of the panel is completely random, there is a 29/45 chance that the first set of odds applies, and a 16/45 chance that the second set applies for an active judge. That means there's a 10.34% * (29/45) + 6.90% * (16/45), or about 9.1%, chance that an active judge will be selected for a given panel. (My original math put this at 10.72% * (29/45) + 9.24% * (16/45), or 10.2%.)

If we're looking at three panels, we take those odds and raise them to the third power. That leaves the odds of serving on any given three panels at about 0.075% (my original math said 0.106%), or roughly 1 in 1,300--in the neighborhood of 1 in 1000.
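Under the same simplifying assumptions (29 active judges, 16 senior judges, each seat drawn uniformly at random, and the rough 29/45-versus-16/45 split between the two panel compositions), the arithmetic can be checked with a short script:

```python
ACTIVE, SENIOR = 29, 16  # Ninth Circuit judges at the time

# P(a given active judge sits on a 3-judge panel), by panel composition:
# all three seats drawn from the active judges...
p_three_active = 3 / ACTIVE   # = 1/29 + (1/28)(28/29) + (1/27)(27/29)
# ...or one seat goes to a senior judge, leaving two active seats
p_two_active = 2 / ACTIVE

# Weight by the chance the panel is all-active vs. includes a senior judge
p_panel = (ACTIVE / (ACTIVE + SENIOR)) * p_three_active \
        + (SENIOR / (ACTIVE + SENIOR)) * p_two_active

# Serving on three independently drawn panels
p_three_panels = p_panel ** 3
print(f"{p_panel:.2%} per panel; {p_three_panels:.4%} for three panels "
      f"(about 1 in {round(1 / p_three_panels)})")
```

The per-panel figure comes out near 9.1%, and the three-panel figure near 0.076%, matching the back-of-the-envelope numbers above.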

There are a number of caveats. Not all judges are "on" at the same time, and some pick different months to be available for panels, so the number at any given time is often fewer than 29. Senior judges have a lower caseload, and their odds of being picked may or may not be as certain a figure. Sometimes visiting judges, such as district court judges or senior judges from other circuits, may sit on the panel. Not all of the vacancies were filled on the Ninth Circuit in the last couple of years, and the number of senior judges fluctuated slightly in that period, too.

Yes, there's deeply limited data. But that's the high-level math behind any given Ninth Circuit panel selection.

Update: Roger Ford tweets, for just Ninth Circuit active judges, "It’s 3/29 or (1/29)+(1/28)*(28/29)+(1/27)(27/29)." That's more accurate--it factors in the odds that a given judge has already been selected for an earlier slot on the three-judge panel. And that's about 10.3%, slightly lower odds than my original math, but it still comes out to just about 1 in 1000 across three panels.

Update II: I've attempted to fix the math... but this is the peril when it's been a decade since my last math class. Thanks for all the patience with my rough efforts.