Will Goodhart's Law come to USNWR's Hein-based citation metrics?

An old saying is attributed to the economist Charles Goodhart: “When a measure becomes a target, it ceases to be a good measure.” In the office, in government, or anywhere, really, once you identify a measure as a target, people begin to behave in ways that maximize the value of that measure, and the measure loses its value—because it’s no longer accurately measuring what we hoped it would measure.

The recent announcement from U.S. News & World Report that it would begin to incorporate HeinOnline citation data into a rankings formula offers much to discuss. Currently, the “quality” of a law school faculty is roughly measured by a survey of 800 law school faculty, asking them to rank schools on a 1-to-5 scale (which likely results in artificially low and highly compressed scores).

This isn’t a remarkable proposition. In the first Maclean’s ranking of Canadian law schools in 2007, Professor Brian Leiter helped pioneer a rankings system that included faculty citations. And every few years, Professor Greg Sisk releases a ranking of law school faculty that builds on Professor Leiter’s method.

The significance, however, is that it is USNWR doing it. The USNWR rankings seem to have outsized influence on law school behavior.

That said, USNWR has announced that, for the moment (i.e., the rankings released in 2019), the ranking will be separate and apart from the existing USNWR ranking. Much like USNWR separately ranks schools by racial diversity, debt loads, or part-time programs, this would be an additional factor. That means it may have much less impact on how schools respond. I imagine prospective applicants will still likely rely primarily on the overall score.

In the future, it may (this is an open question, so let’s not freak out too quickly!) be used in the overall rankings. On the whole, that may well be a good change (though I imagine many will disagree even with this hesitant claim!). Rather than the subjective, perhaps too sticky, assessments of faculty voting, this provides an objective (yes, even if imperfect!) metric of faculty productivity and influence. And if it is used in the overall score in the future as one more small component of an overall ranking, that strikes me as appropriate.

There are downsides to any citation metric, of course, so we can begin with those. What to measure, and how, will never be agreed-upon terms. This metric is not exempt from those downsides. To start, USNWR announced that it would use the “previous five years” of citations on Hein. “This includes such measures as mean citations per faculty member, median citations per faculty member and total number of publications.”

To name a few weaknesses, then: Schools with disproportionately more junior faculty will be penalized; so, too, will schools with significant interdisciplinary publications or books that may not appear in Hein. (It’s not clear how co-authored pieces, or faculty with appointments at more than one institution, would be treated.)

But I’m more concerned with the Goodhart principle: once we start measuring citations, what impact might that have on faculty behavior? A few possibilities come to mind.

One is the temptation to recast senior, would-be emeritus faculty with sufficient scholarly output as still-full-time faculty members, among other ways of recategorizing faculty. Perhaps these are marginal changes that all schools will engage in simultaneously, and the results will wash out.

Another, much more realistic, risk is that of inflated or bogus citations. Self-citation is one way scholars might try to overstate their own impact, and men tend to self-cite at disproportionately higher rates than women. Journals with excessive self-citations are sometimes punished in impact-factor rankings. Pressure to place pieces in home-institution journals may also increase.

Hein currently measures self-citations, a nod in this direction. Some self-citations are assuredly acceptable, but unusually high rates may raise questions. The same might be true if colleagues started citing one another at unusually high rates, or if they published puff pieces in marginal journals available on Hein with significant citations to themselves and their colleagues.

My hope (ed.: okay, the rosy part begins?) is that law professors will act without regard to the new measure, and that law school administrators seeking to improve their USNWR ranking will not pressure law faculty to change their behavior. And perhaps Hein and USNWR will ensure that their methodology prevents such gaming.

The same holds true for books or interdisciplinary journals that don’t appear on Hein. My hope is that schools continue to value them apart from whatever value they contribute as a component of the rankings. (And it’s worth noting that this scholarship will continue to receive recognition to the extent that the “peer review” voting reflects faculty output of all types.) (Another aside: perhaps this offers Hein some leverage in seeking to license interdisciplinary journal content in the years ahead….)

This hope is, perhaps, overly optimistic. But I think a school that starts to pressure its faculty to change the kind of scholarship they produce would receive significant backlash in the legal community. In contrast, the pressure will probably be greater on those faculty who are currently not producing scholarship or not receiving citations to their work—a different pressure.

It will be much easier for schools to “game” the median citation figure—finding the single faculty member in the middle and trying to climb the ladder that way. The median is probably a better metric, in my view (because the mean can be disproportionately inflated by an outlying faculty member), but it is also more likely to be susceptible to Goodhart’s Law. Mean citations would be a tougher thing to move as dramatically or with such precision. A quick sketch of the arithmetic appears below.
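To make that concrete, here is a minimal sketch in Python, using entirely hypothetical citation counts for a nine-member faculty (my numbers, purely for illustration), of why a targeted boost moves the median far more than the mean:

```python
# Hypothetical citation counts for a nine-member faculty (illustration only).
citations = [5, 10, 20, 30, 40, 55, 90, 150, 400]

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    return s[len(s) // 2]  # middle element of an odd-length list

print(mean(citations), median(citations))  # about 88.9, and 40

# Add 30 citations for the single faculty member in the middle:
citations[4] += 30  # 40 -> 70
print(mean(citations), median(citations))  # about 92.2, and 55
```

Thirty well-placed citations move the median from 40 to 55, but the mean by barely three points; and the same thirty citations added to anyone else on this hypothetical faculty would not move the median at all.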

The news from USNWR also indicates it will measure the total number of publications. That is an output metric in addition to the “influence” metrics of mean and median citations. It could benefit the more junior cohorts of a faculty, who tend to publish at a higher rate as they strive for tenure. (One such illustration is here.)

Finally, this could have an impact on how resources are allocated at institutions. In the past, scholarly output was not a part of the USNWR rankings formula. If it becomes part of the formula in the future, it will become a more valuable asset for institutions to fund and support, and a different way of valuing faculty members.

There are lots of unknowns about this process, and I look forward to seeing the many reactions (sure to come!), in addition to the final formula used and what those rankings look like. These are all tentative claims that may well overstate or understate certain things—they are only a few observations made over a short period of time. Time will tell.

"Why not continue the political struggle in partisan-gerrymandering cases?"

I have this piece up at SCOTUSblog entitled, “Symposium: Why not continue the political struggle in partisan-gerrymandering cases?” It begins:

“In a democratic society like ours, relief must come through an aroused popular conscience that sears the conscience of the people’s representatives.” So wrote Justice Felix Frankfurter in his dissenting opinion in Baker v. Carr in 1962.

It was, of course, a dissent. A majority of the Supreme Court in short order reorganized state legislatures according to its own understanding of fair representation — that population should be roughly equal in each legislative district. And the majority’s basis for doing so, Frankfurter’s dissent chided, “ultimately rests on sustained public confidence in [the Court’s] moral sanction.”

The political process is a messy thing. It is laborious to educate the public on a matter and convince them of that matter’s significance. It is time-consuming to wait through election cycles to enact political changes. Impatient litigants demand that the federal courts intervene when the political process moves too slowly.

Law school ruin porn hits USA Today

I actually laughed out loud when I started reading this “yearlong investigation” by four USA Today journalists on the state of legal education. I call the genre “law school ruin porn.”

“Ruin porn” has long been a genre of photojournalism displaying the decay of urban centers or abandoned Olympic sites. And I think the genre works for “law school ruins”: exploiting details about the most marginal law schools and the most at-risk students, then treating them as typical of the profession.

Here’s how the piece opens:

Sam Goldstein graduated from law school in 2013, eager to embark on a legal career.

Five years later, he is still waiting. After eight attempts, Goldstein has not passed the bar exam, a requirement to become a practicing attorney in most states.

"I did not feel I was really prepared at all" to pass the bar, Goldstein  said of his three years in law school. "Even the best of test preps can't really help you unless you've had that solid foundation in law school."

In the meantime, many take lower-paying jobs, as Goldstein did, working as a law clerk. What he earned didn't put a dent in his $285,000 in student-loan debt, most of which was accrued in law school.

The piece is reminiscent of a genre of journalism that peaked in 2011 in a series of pieces by David Segal in the New York Times. Here’s how one of them opened:

If there is ever a class in how to remain calm while trapped beneath $250,000 in loans, Michael Wallerstein ought to teach it.

Here he is, sitting one afternoon at a restaurant on the Upper East Side of Manhattan, a tall, sandy-haired, 27-year-old radiating a kind of surfer-dude serenity. His secret, if that’s the right word, is to pretty much ignore all the calls and letters that he receives every day from the dozen or so creditors now hounding him for cash.

“And I don’t open the e-mail alerts with my credit score,” he adds. “I can’t look at my credit score any more.”

Mr. Wallerstein, who can’t afford to pay down interest and thus watches the outstanding loan balance grow, is in roughly the same financial hell as people who bought more home than they could afford during the real estate boom. But creditors can’t foreclose on him because he didn’t spend the money on a house.

He spent it on a law degree. And from every angle, this now looks like a catastrophic investment.

Well, every angle except one: the view from law schools.

The fundamental problem with a piece like this one in USA Today is how it treats the outlier as the norm. The vast majority of law students do pass the bar exam on the first attempt. The vast majority of law schools are at no risk of failing to meet the ABA’s standards. But the piece is framed in quite a different fashion.

A student like the one USA Today found is nearly impossible to find. For instance, I blogged earlier about how 2,293 first-time test-takers fared on the Texas bar exam. Only 10 failed the bar exam even four times—about 0.4%. Granted, beyond those 10, about another 150 failed one, two, or three attempts and stopped attempting (at least, stopped attempting in Texas). But it’s nearly impossible to find graduates who have had such poor performance, bad luck, or some combination of the two for such an extended period of time.

USA Today also profiled a graduate of Arizona Summit Law School, the outlier for-profit law school—I’ve blogged before about how the ABA refused to accredit for-profit law schools until 1995, when the Department of Justice compelled it to do so. (More on Arizona Summit in a bit.)

The ostensible focus of the piece is the ABA’s renewed proposal to require law schools to demonstrate an “ultimate” bar passage rate of 75% within two years of graduation. The result appears dire: “At 18 U.S. law schools, more than a quarter of students did not pass the bar exam within two years,” according to Class of 2015 data.

Of course, George W. Bush would have lost the 2000 presidential election if the National Popular Vote plan had been in place. Or, less snarkily: if the rules change, we should expect schools—and perhaps state bars—to change how they behave. If 75% were the cutoff, we would expect not just changes in admissions standards, but changes in bar exam cut scores, changes in where students are encouraged to take the bar exam, increased academic dismissal rates, and so on—in short, the 18 schools from the Class of 2015 data don’t tell us much.

That said, there are two other reasons the 18 figure doesn’t tell us much. First, and this makes me more “doom and gloom,” it’s too conservative a figure to capture the schools that may face a problem in the near future. Any school near an 80% ultimate pass rate, I think, would feel the heat of this proposal—a bad year, a few frustrated students who stop repeating, a weak incoming class, and so on could move a school’s figures a few percentage points and put it in danger. Another 12-15 law schools are within a zone of danger of the new ABA proposal.

Second, the 18 is not nearly as dire as the USA Today piece makes it seem. Two of them are schools in Puerto Rico, which are so different in kind from the rest of the ABA-accredited law schools in the United States that they are essentially two entirely different markets.

At the very end, the piece finally concedes something about Arizona Summit: “Arizona Summit Law School in Phoenix, Whittier Law School in Southern California and Valparaiso Law School in northern Indiana are not accepting new students and will shut once students finish their degrees.” Even without the ABA proposal, 3 of the 18 schools are shutting down—including Arizona Summit, the foil of the piece’s opening. So the profiled student is not simply an outlier, an 8-time bar test-taker from a for-profit school, but one from a for-profit school that is no longer in operation. An outlier of an outlier of an outlier—given treatment as something typical. Talk about burying the lede.

And while the data come from the ABA, I have to wonder whether, because this is the first such disclosure from law schools, some of it is not entirely reliable. (Again, one would think a yearlong investigation would clear up these points.) Take Syracuse, listed with an ultimate pass rate of 71%. Its first-time July 2015 bar pass rate was 79%. (Its July 2016 rate jumped to 89%.) Its combined February and July 2015 pass rates were 86%, along with 75% in New Jersey. (Its California rate for those two tests was 1-for-13.) Now, perhaps it has an unusually high number of graduates failing out of state, or graduates who didn’t take the July 2015 bar the first time and ultimately failed—I have no idea. But it’s the kind of outlier statistic that, to me, merits an inquiry rather than simply a report of figures.

The piece also unhelpfully quotes, without critique, some conclusions from “Law School Transparency.” (You may recall that several years ago LST tried to shake down law schools by charging them at least $2,750 a year to “certify” that those schools met LST’s disclosure standards.) For instance: “The number of law schools admitting at least 25% of students considered ‘at risk’ of failing the bar jumped from 30 schools to 74 schools from 2010 to 2014, according to a report in 2015 by Law School Transparency.” But if one cares about ultimate pass rates, as this article purports to, how is it that just 18 schools missed the “ultimate” pass rate compared to LST’s projected 74 (for 2014, though things weren’t exactly better by 2015)? In part because LST’s “at risk” definition is overly broad: it doesn’t account for academic dismissals (despite mentioning them in the report); it doesn’t account for variances in state bars (also mentioned in the report, but not used in identifying “at risk” schools); it’s not clear whether LST is primarily concerned with first-time or ultimate passage (the report jumps around); and LST adds a level of risk (which USA Today mistakenly reports) for being “at risk” of not graduating in addition to being “at risk” of not passing the bar (which, I think, is an entirely valid thing to include); and so on.

A lengthy investigative piece should, in theory, provide greater opportunity for subtlety and fine-tuned points, rather than listing a bunch of at-risk schools and serially identifying problems with as many of them as possible. That isn’t to say that there aren’t some existential problems at a handful of law schools in the United States, or that the ABA’s proposal isn’t worthy of serious consideration. It’s simply that this form of journalism is a relic of 2011, and I hope we see the return of more nuanced and complicated analyses.

Election Law news of note, week ending January 18, 2019

Here I compile news items I find of note (even if others may not find them of note!) regarding Election Law topics each week.

Iowa’s governor has proposed ending permanent felon disenfranchisement in the state. Iowa is one of a handful of states that still impose it under our patchwork quilt of state-based voter qualification rules. This comes on the heels of the successful repeal in Florida just last year. Giving convicted felons a second chance has seen growing bipartisan support on a variety of fronts, including the recent passage of the FIRST STEP Act in Congress. The details of such a proposal remain to be seen—as does whether Iowans will support it.

Speaking of Iowa, “radical changes” are promised for the 2020 Iowa Democratic caucuses. The likely solutions, a “proxy” caucus and a “tele-caucus,” are sure to increase participation in the event. I’ve wondered how historic structures like “realignment,” a tool that benefited Barack Obama in 2008, might look in a new format, and whether the results might differ. Assuredly, a change in process will lead to increased uncertainty ahead of the caucuses—perhaps simply building excitement!

A federal court of appeals declined to extend the long-standing consent decree in litigation known as DNC v. RNC. The decree began with litigation that Democrats filed against Republicans in 1981, and it has been extended repeatedly since then to prevent Republicans from engaging in certain election-related tactics. But it’s worth remembering that back in 2016, a similar effort was raised, and the Supreme Court, without noted dissent, declined to consider the issue. Perhaps that was in part because the litigation arose in literally the days before the election, and there was little opportunity to develop the record. But Justice Ruth Bader Ginsburg wrote separately concerning her reason: Ohio law already prohibits voter intimidation. Perhaps, then, extending a 1981-era consent decree is unnecessary, as long as existing state laws are, well, not inadequate to the task. We shall see if future challenges arise concerning this consent decree.

By the way, it’s going to be a busy week for faithless elector litigation! Oral argument is scheduled for cases in Colorado and Washington this upcoming week.

Election Law news of note, week ending January 11, 2019

In an effort to use Twitter less, I’ll try to start compiling news items I find of note (even if others may not find them of note!) regarding Election Law topics each week.

A new bill introduced in New York would prohibit state parties from using “Independent” or “Independence” in their names. The Independence Party sometimes cross-endorses candidates or runs its own candidates for office. But some lawmakers believe the term deceives voters. This is not a unique problem. In California, the obscure American Independent Party has garnered a significant number of registered voters, likely because of its name. I’ve written about the concept of “Ballot Speech,” or the right of candidates and political parties to express themselves to voters by means of the ballot. I think political parties—especially an established 25-year-old group like the Independence Party—should receive more protection than they currently do, for reasons I lay out in the article. Regardless, the bill struck me as one of note.

The West Virginia House of Delegates has asked the Supreme Court to consider a Guarantee Clause claim arising out of the state’s impeachment proceedings. The House impeached all members of the West Virginia Supreme Court and sent the claims over to the Senate for trial. But “the acting court halted the impeachment process in West Virginia by concluding that legislators had overstepped their constitutional authority. Acting justices concluded lawmakers had based impeachment on areas the state Constitution set aside as the responsibility of the judicial branch.” It’s an interesting internal power struggle, and hardly the first time the West Virginia Supreme Court has been the topic of Supreme Court election-related litigation. I think the Court is unlikely to grant the petition, and even less likely to find a Guarantee Clause violation, but the brief was of interest to me.

Democrats took over majority control of the House of Representatives and introduced H.R. 1, a symbolic and sweeping 571-page bill regarding elections in the United States. I won’t spend much time on each piece of the bill because it has effectively no chance of becoming law. But two provisions struck me as notable. First, the bill basically leaves untouched Shelby County and the related portions of the Voting Rights Act. Given that Shelby County has drawn some of the sharpest criticism of the Supreme Court from left-leaning politicians in recent years, it seems strange to me that the only thing the act does is provide, “Congress is committed to reversing the devastating impact of this decision.” Perhaps an updated Voting Rights Act merits a separate bill. But I found it notable that in 571 pages, Shelby County was almost nowhere to be found. It seemed that not all Democratic constituencies had a hand in crafting the bill.

Second, the bill includes a “Democracy Restoration” provision (Section 1402) providing that the right to vote in federal elections shall not be abridged or denied on the basis of a criminal conviction, unless the individual is “serving a felony sentence in a correctional institution or facility at the time of the election.” Setting aside the policy of this provision, I’ve long wondered what constitutional hook would authorize Congress to do so (although there are some plausible if unlikely bases). There’s no express constitutional hook in this bill, but a subsequent provision (Section 1407) was of interest: it prohibits a State from using federal funds “to construct or otherwise improve a prison, jail, or other place of incarceration” unless that State “has in effect a program under which each individual incarcerated in that person’s jurisdiction . . . is notified, upon release from such incarceration, of that individual’s rights under section 1402.” It seems to me that the Spending Clause is a rather difficult hook for expanding the franchise. That said, technically, Section 1407 only requires states to “notif[y]” individuals, not actually enfranchise them, so perhaps the hook isn’t the Spending Clause—instead, perhaps it’s one of the reasons I’ve mused about earlier.

Do specific substantive courses prepare students for those topics on the bar exam? Probably not

Earlier, I blogged about the disconcerting conclusion, from recent bar performance and the results of a California State Bar study, that law school “bar prep programs” appear to have no impact on students’ ability to pass the bar exam.

But what about specific substantive course areas? Does a student’s performance in, say, Torts translate into a stronger bar exam score?

The answer? Probably not.

First, let me clear a little underbrush about what claim I’d like to examine. We all know that students take some subjects that appear on the bar, but most don’t take all of them. And virtually all law school graduates take a bar preparation course offered by a for-profit company to help train them for the bar exam.

But law schools might think that they could improve bar passage rates by focusing not simply on “bar prep,” but on the substantive courses that will be tested on the bar exam. If bar passage rates are dropping, then curricular reform that requires students to take more Evidence, Torts, or Property might be a perceived solution.

So what exactly is the relationship between substantive course performance and bar exam performance? Not much of one.

Back in the 1970s, LSAC commissioned a study of law schools in several states and their graduates’ performance on the bar exam. The then-new Multistate Bar Exam had five subjects. Researchers looked at how law students performed in each of those substantive subject areas in law school: Contracts, Criminal Law, Evidence, Property, and Torts. (The results of the study are found at Alfred B. Carlson & Charles E. Werts, Relationships Among Law School Predictors, Law School Performance, and Bar Examination Results, Sep. 1976, LSAC-76-1.)

The LSAC study examined first-year subject-area grades; first-, second-, and third-year grades; and overall law school GPA, and their correlations with performance in each MBE subject area. The higher the number, the closer the relationship.

Torts is an illustrative example. The correlation between TORT/L (grades in Torts) and student performance on the Torts portion of the MBE is 0.19, relatively weak. But grades in Torts were more predictive of performance in Real Property, Evidence, Criminal Law, and Contracts—perhaps a counterintuitive finding. That is, your Torts grade told you more about your performance on the Property portion of the bar exam than on the Torts section.

Again, these correlations are relatively weak, so one shouldn’t draw much from differences within that noise, like 0.19 versus 0.26.
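For perspective (my gloss, not the study’s): squaring a correlation coefficient gives the share of variance in one measure accounted for by the other,

$$ r^2 = (0.19)^2 \approx 0.04, $$

so Torts grades account for roughly 4% of the variation in MBE Torts performance—and even the 0.26 figure accounts for only about 7%.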

In contrast, LGPA/L (overall law school GPA) was more highly correlated with each MBE subject area than any individual course grade was, and it was highly correlated (0.55) with total MBE performance. Recall that overall law school GPA includes a number of courses—bar-related and not—and yet it is more predictive than any particular substantive course grade.

The LSAC study dug further into the findings and concluded that the bar exam tests “general legal knowledge,” and that performance in any particular subject area is not particularly indicative of performance on that subject area on the bar exam.

The short of it is that this is good evidence that the important thing coming out of three years of law school is not the transmission of substantive knowledge but, for lack of a better phrase, the ability to “think like a lawyer” (or simply to engage in critical legal analysis). Bar prep courses the summer before the bar exam are likely the better place to cram the substantive knowledge; the broad base of legal education is what’s being tested (perhaps imperfectly!) on the bar exam.

We also have the results of a recent study by the California State Bar, which looked at student performance in particular course areas and its relationship with bar exam scores. The study examined the results of thousands of students and bar results from 2013, 2016, and 2017, and its findings are almost identical.

The correlations between any one subject and that subject on the bar exam are modest, and sometimes a course is (slightly) more highly correlated with a different subject area—the same findings as LSAC’s 1976 study. But none of them is nearly as strong as overall law school GPA, which, as the study finds, correlates between .6 and .7 with the overall MBE and written components. (Unfortunately, this study didn’t break out the relationship between law school GPA and particular MBE topic areas.)

The study did, however, make an interesting finding and reached what I think is an incorrect possible conclusion.

The study discovered that cumulative GPA in California bar exam-related subject areas (listed above) was highly correlated with cumulative GPA in non-bar exam-related subject areas.

It went on to find no relationship (in some smaller sets of data) between bar passage rates and participation in clinical programs; externships; internships; bar preparation courses; and “Non-Bar Related Specialty Course Units” (e.g., Intellectual Property).

Here’s the finding I’d take issue with: “However, overall CBX [California bar exam] performance correlated more strongly statistically with aggregate performance in all of the bar-related courses than with aggregate performance in all non-bar-related courses, suggesting that there may be some type of cumulative effect operating.”

I’m not sure that’s the right conclusion to draw. I think the report understates the likelihood that grade inflation in seminar courses, greater inconsistency in grading in courses taught by adjuncts, and grades in courses that don’t measure the kinds of skills evaluated on the bar exam (e.g., oral advocacy in graded trial advocacy courses) all affect non-bar-related course GPA. That is, my suspicion is that if one were to measure GPA in other substantively similar non-bar-related courses (e.g., Federal Courts, Antitrust, Secured Transactions, Administrative Law, Mergers & Acquisitions, Intellectual Property), one would likely find a relationship similar to that for bar-related course GPA. That’s just a hunch. It’s what I’d love to see future reports examine.

That said, both in 1976 and in 2017, the evidence suggests that performance in a specific substantive course has little to say about how the student will do on the bar—at least, little unique to that course. Students who do well in law school as a whole do well on each particular subject of the bar exam.

When law schools consider how best to prepare their students for the bar, then, simply channeling students into bar-related subjects is likely ineffective. (And that’s not to say that law schools shouldn’t offer these courses!) Alternative measures should be considered. And I look forward to more substantive course studies like California’s in the future.

Why are law school graduates still failing the bar exam at a high rate?

The first decline took place with the July 2014 bar exam, which some believed might be blamed on an ExamSoft software glitch. Then came continued declines on the July 2015 exam, which some blamed on the addition of Civil Procedure to the Multistate Bar Exam. The declines persisted and even worsened.

Five straight July bar exam cycles with persistent low pass rates across the country. But the bar exam has not become more difficult. Why?

One reason rates remain low is that predictors for incoming classes remain low. LSAT scores actually declined among the most at-risk students between the incoming classes admitted in the 2011-2012 cycle (graduating in 2015) and the 2014-2015 cycle (graduating in 2018). The median 25th-percentile LSAT among full-time entrants dropped 2 points between the Class of 2015 and the Class of 2018. Indeed, 11 schools saw a drop of at least 5 points in their incoming classes’ 25th-percentile LSAT—almost as many as the schools that saw any improvement whatsoever (just 12, including Yale and Stanford).

Not all LSAT declines are created equal: a drop from 170 to 168 is much more marginal than a drop from 152 to 150, and a given drop can have a bigger impact depending on the cut score of the bar exam in each jurisdiction. But it’s no surprise, given this quick aggregate analysis, to see persistently low, and even declining, bar passage rates around the country.

Nevertheless, since around September 2014, law schools have been acutely aware of the problem of declining bar passage rates. Perhaps it was too late to course-correct on admissions cycles through at least the Class of 2017.

But what about academic advising? What about providing bar preparation services for at-risk students? Given that law schools have been on notice for nearly five years, why haven’t bar passage rates improved?

I confess, I don’t know what’s happened. But I have a few ideas that I think are worth exploring.

First, it seems increasingly likely that academic dismissal rates, while rising slightly over several years, have not kept pace with the significant decline in the quality of entering students. Of course, academic dismissals are only one part of the picture, and a controversial topic at that, particularly if tethered to projections about future likelihood of passing the bar exam on the first attempt. I won’t delve into those challenging discussions; I simply note them here.

Another possibility is that law schools haven’t provided academic advising or bar preparation services to students—but that seems unlikely.

Still another, and perhaps much more alarming, possibility is that those bar services have been ineffective (or not as effective as one might hope). And this is a moment of reckoning for law schools.

Assuredly, when the first downturns in scores came, law schools felt they had to do something, anything, to right the ship. That meant taking steps that would calm the fears of law students and appease universities. Creating or expanding bar preparation courses, or hiring individuals dedicated to bar preparation, were easy solutions: law students could participate in direct and tangible courses specifically designed to help them achieve bar exam success; law faculty could feel relieved that steps were being taken to help students; and university administrators could feel confident that something was being done. Whether these efforts bolstered existing courses or added to them, schools assuredly provided opportunities to their students.

But… to what end? Something was done at many institutions. Has it been effective?

Apparently not. The lagging (and falling) bar passage rates are a sign of that. Granted, perhaps the slide would be worse without such courses, but that seems like cold comfort to schools that have been trying to affirmatively improve rates.

We now have the first evidence to that effect. A report commissioned by the California State Bar recently studied several California law schools that disclosed student-specific data on a wide range of fronts—not just LSAT and undergraduate GPA in relation to bar exam scores, but law school GPA, courses taken, and even participation in externships and clinics.

One variable to consider was involvement in a bar preparation course. Did participation in a bar preparation course help students pass the bar? I excerpt the unsettling finding here:

Five law schools provided data for this variable. Students averaged about 1.5 units (range 0 to 6). For all those students, there was a -.20 (p<.0001) correlation between the number of units taken and CBX TOTSCL [California Bar Exam Total Scale Scores]. The source of this negative relationship appears to be the fact that in five out of six [sic] of the schools, it was students with lower GPAs who took these classes. After controlling for GPA, the number of bar preparation course units a student takes had no relationship to their performance on the CBX. A follow up analysis, examining just the students in the lower half of GPA distribution, showed that there was no statistically significant difference in CBX TOTSCL for those who took a bar preparation course versus those who did not (p=.24). Analyses conducted within each of the five schools yielded similar findings.

This should be a red flag for law schools seeking to provide bar preparation services to their students. In this study, whatever law schools are doing to help their students pass the bar has no discernible impact on students’ actual bar exam scores.
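What “controlling for GPA” means there can be illustrated with a small simulation. This is only a sketch with entirely made-up numbers and my own variable names (not the study’s data or method), written in Python with numpy: lower-GPA students take more bar-prep units, GPA alone drives the simulated bar score, and the prep units have no direct effect. The raw correlation comes out negative, yet the partial correlation, holding GPA constant, is near zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Made-up data: lower-GPA students take more bar-prep units,
# GPA drives the bar score, and prep units have no direct effect.
gpa = rng.normal(3.0, 0.4, n)
prep_units = np.clip(6 - 1.5 * gpa + rng.normal(0, 1.0, n), 0, 6)
bar_score = 100 * gpa + rng.normal(0, 25.0, n)

# Raw correlation is negative (weaker students take more units):
print(np.corrcoef(prep_units, bar_score)[0, 1])

def residuals(y, x):
    """Residuals of y after regressing it on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Partial correlation, controlling for GPA, is near zero:
print(np.corrcoef(residuals(prep_units, gpa),
                  residuals(bar_score, gpa))[0, 1])
```

The raw correlation is negative only because weaker students self-select into the courses; once GPA is held constant, the apparent relationship vanishes—the same pattern the study reports.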

Granted, these are just five California law schools and the California bar. And there have been other school-specific programs at some institutions that may provide a better model.

But it’s worth law schools considering whether students are on a path toward improved bar passage success or simply on a hamster wheel of doing more work without any discernible positive impact. More studies and evidence are of course in order. But the results from the last several years, confirmed by the study of five California law schools, suggest that revisiting the existing model is a matter of some urgency.

Annual Statement, 2018

Site disclosures

Total operating cost: $192

Total content acquisition costs: $0

Total site visits: 74,081* (-9.7% over 2017)

Total unique visitors: 62,638 (-8.5% over 2017)

Total pageviews: 101,049 (-11% over 2017)

Top referrers:
Twitter (2045)
Reddit (1312)
Facebook (584)
ABA Journal (470)
Blogarama (216)
Top-Law-Schools (164)
Election Law Blog (115)
SCOTUSBlog (92)

Most popular content (by pageviews):
Ranking the most liberal and conservative law firms (July 16, 2013) (19,051)
Visualizing the 2018 U.S. News law school rankings--the way they should be presented (Mar. 14, 2017) (6218)
Politifact fact-check: the Ninth Circuit is, in fact, the most reversed federal court of appeals (Feb. 20, 2017) (3760)
February 2017 MBE bar scores collapse to all-time record low in test history (Apr. 7, 2017) (3251)
Where are they now? Supreme Court clerks, OT 2007 (Sept. 22, 2017) (2627)
The best prospective law students read Homer (Apr. 7, 2014) (2449)

I have omitted "most popular search results" (99% of search results not disclosed by search engine, very few common searches in 2018).

Sponsored content: none

Revenue generated: none

Platform: Squarespace

Privacy disclosures

External trackers: one (Google Analytics)

Individuals with internal access to site at any time in 2018: one (Derek Muller)

*Over the course of the year, various spam bots from sites like Semalt, Adfly, Snip.to, and others began to visit the site at a high rate. As they did, I added them to a referral exclusion list, but their initial visits are not disaggregated from the overall totals. These sites are also excluded from the top referrers list. Additionally, all visits from my own computers are excluded.