Anatomy of a botched USNWR law ranking leak

For the past few years, USNWR has emailed all law school deans an embargoed PDF listing the tentative law school rankings about a week before their formal release. And for the past few years, within minutes (and in disregard of that embargo), that email has been leaked to a private consulting company, which then posts the rankings on its corporate blog, from which they spread via social media and gossip sites.

This year, USNWR did something different. It released most of its graduate school rankings in an Excel spreadsheet on a password-protected site around 8 am ET on Tuesday, March 5. But it did not release the full-time law school rankings, nor the business school rankings. (I guess we know which schools are beholden to these rankings and where USNWR sees its value!)

Instead, shortly after, individuals at schools received their own school's ranking, and nothing more. This makes leaking much more challenging. If you leak your own school's ranking, it's obvious you leaked it, and USNWR may punish you by not giving you access to that embargoed data early next year. 

But around 5 pm ET on Tuesday, March 5, USNWR sent out a new update. Its Academic Insights database would now have the 2020 rankings data (that is, the rankings data to be released March 12, 2019). 

Academic Insights is a USNWR platform that law schools purchase a license to access and use. It has rankings data stretching back to the beginning. It offers multiple ways to view the data inside AI, or to pull the rankings data out of the database. 

It's fairly user-friendly, but it isn't always the easiest to operate, and like many web databases it can suffer from some wonky behavior. That makes leaking a trickier proposition.

Around 7 pm ET March 5, the private consulting company posted the rankings. But the posted rankings made it very obvious that there were errors, and they also provided clues about how those errors came about.

To leak this information, some law school administrator made a data request from the database and exported the rankings information to a CSV file. The “Leaderboard” feature of the AI database is a quick way to see the ranking of law schools compared to one another across categories. (Recall that the database stretches back through the history of USNWR, so it includes all schools that were ever ranked over the last 30 years, whether or not they’re ranked, or even exist, this year.)

The list then included as “N/A” (i.e., “unranked” this year) schools like Arizona Summit and the University of Puerto Rico. This is unsurprising because USNWR doesn’t rank (1) provisionally-accredited schools, (2) schools under probation, and (3) the schools in Puerto Rico.

But the leaked ranking included other bizarre “unranked” choices: Hamline University; Pennsylvania State University (Dickinson) pre-2017; Rutgers, The State University of New Jersey--Camden; Rutgers, The State University of New Jersey--Newark; Widener University; and William Mitchell College of Law (among others). These schools all no longer exist (along with a couple of others that have announced closures). Why list them as “unranked”?

Separately, the leaked rankings omitted information for Penn State - University Park, Penn State - Dickinson, Rutgers, Widener University (Commonwealth), Widener University Delaware, and Mitchell|Hamline. Why aren’t these schools in the database?

These are obviously not random database omissions. They're omissions of schools that split or merged. Their predecessor schools are in the database. But the Leaderboard data pull omitted those schools. (Why, I don't know.)

But there are ways of requesting school-specific data. You could request the specific institutional data in the AI database for, say, Penn State - University Park or Rutgers, and the data is now available for your review—including those institutions’ ranks. Of course, a few schools might ultimately be "rank not published," or "Tier 2" schools in the rankings. But they're not "unranked."

(Incidentally, from the revealed metadata, we know a lot of information about which person at which law school leaked the rankings, but that’s not what this blog post is about.)

The real botching came when the leaked ranking, with these strange inclusions and omissions and some noticeable gaps (think two schools listed at 64, followed by a school ranked 67, which means a third school tied at 64 is missing), was posted and began to spread. Panicked students and prospective students at places like Penn State and Rutgers asked what happened. The private consulting company replied that it “appeared” the schools were “unranked.” That spawned a great deal of speculation and worry among these students.

Of course, that wasn’t the case. The statements speculating that these schools appeared to be “unranked” were reckless—that is, they were made without an understanding of how the database operates and rested instead on speculation—and false—because, as I noted, each of these omitted schools had a ranking in the database, simply not in the CSV leaked to this private consulting company. (Later statements began to concede that these schools would probably be ranked, but those statements came only after worry and misinformation had spread.)

I pushed back against this false news last week in a couple of social media outlets, because it does no good to perpetuate false rumors about these law schools. These law schools, I insisted, would probably be ranked. They were ranked at that very moment in the AI database; and, barring a change, they’d be ranked when the rankings were released (i.e., now). (Of course, some schools, like those under probation or those in Puerto Rico, were never going to be ranked.)

The backlash I received on social media was impressive. I confess, I'm not sure why so many prospective law students felt threatened by my insistence that someone had disclosed bad information about schools like Penn State and Rutgers to them! (Happily, such comments roll off easily.) After that, apparently, USNWR asked for those rankings to be taken down, and they were. (Of course, they still floated around social media and gossip sites.)

But we now know that leaking USNWR information from the AI database presents complications for future leaks. Failure to understand how to operate the database can leave an incomplete and inaccurate picture, as occurred this year with the botched leak. We’ll see what USNWR does for the 2021 rankings—is a complete and accurate leak better, or an incomplete and inaccurate one?

And for those relying on leaks in the future? Read skeptically. The leak included material errors this year, and I wouldn't be surprised to see material errors in future leaks.

JD enrollment for 2019 on pace for another modest increase but slight quality decline

For the last year, LSAC has offered really useful tools for tracking the current status of JD applicants and LSAT test-takers. Data is updated daily and visualized cleanly.

We’re at a stage where we’d expect just about 80% of the applicant pool to be filled out, so it should be useful to see where things stand.

Applicants are up 3.1% year-over-year. Applicant increases don’t perfectly equate to increases in matriculants—last year, we saw an 8% applicant spike but a more modest 2.6% matriculant increase. But up is better than down, and we may see another modest increase in JD enrollment in 2019. We’ve seen modest increases the last several years—better for law schools than flat enrollment, but a continued reflection of a “new normal” in legal education.

A more disappointing development is that the quality of applicants has declined. Applicants with a 165 or higher LSAT score are down a few points. The bulk of the increase in applicants comes from those scoring 155-159. But then again, those with LSAT scores below 140 are also in decline.

It remains to be seen how many would-be high-LSAT performers opted for the GRE in lieu of the LSAT, which may affect how LSAT scores are distributed among applicants. But it’s another reason to think that any increase in JD enrollment in 2019 will be lower than the increase in the size of the applicant pool—at least, if law schools seek to preserve the quality of their classes.

Statehood, the District of Columbia, and the Twenty-Third Amendment

There’s a renewed effort for statehood for the District of Columbia in the new Democratic-controlled House, and H.R. 51 is the proposal to do so. (DC’s non-voting representative, Eleanor Holmes Norton, has snagged bill #51 as a symbolic gesture in the past, too.)

I’d recently wondered whether electors of the District of Columbia could cast votes for presidential and vice presidential candidates who resided in the District. As you may know, the Twelfth Amendment requires that electors cast two votes, one for president and one for vice president, “one of whom, at least, shall not be an inhabitant of the same state with themselves.”

DC isn’t a state, but the Twenty-Third Amendment gave DC presidential electors. So, could DC electors vote for an all-DC ticket? No, due to a clever phrase in the Amendment: “they shall be in addition to those appointed by the states, but they shall be considered, for the purposes of the election of President and Vice President, to be electors appointed by a state.” (Emphasis added.) In other words, whatever parts of the Constitution refer to “state” in the context of presidential electors? Those apply to DC’s electors, too.

But back to H.R. 51. The bill, like many such bills, excises several blocks from the District when creating a new state. Those blocks remain the seat of government of the United States and are not a part of the state. So a bill like H.R. 51 would create a new state out of the old District, but it would basically split the District into two: a new-state District, and a seat-of-government district. The seat-of-government district would be quite small, essentially a residual made up of several federal buildings and the like.

Of course, the Twenty-Third Amendment comes back into play: “The District constituting the seat of government of the United States shall appoint in such manner as the Congress may direct . . . .” So if DC becomes a state, it gets two Senators, at least one Representative, and at least three presidential electors. But the residual district—remember, just a few federal buildings carved out—is constitutionally entitled to presidential electors. No more than a few dozen people may live in this new seat of government—after all, the residences of DC have been put into the new state.

A little Googling revealed this point has been raised before. That is, statehood for DC ought to be conditioned on a repeal of the Twenty-Third Amendment, lest an anomalous residual set of a few federal buildings be entitled to a slate of presidential electors. That hasn’t been a point of emphasis, of course—getting popular support for statehood for DC is a high hurdle, and the first one that must be surmounted—but it struck me as notable.

Gaming out Hein citation metrics in a USNWR rankings system

Much has been said about the impending USNWR proposal to have a Hein-based citation ranking of law school faculties. I had some of my own thoughts here. But I wanted to focus on one of the most popular critiques, then move to gaming out how citations might look in a rankings system.

Many have noted how Hein might undercount individual faculty citations, either accidentally (e.g., misspellings of names) or intentionally (e.g., exclusion of many peer-reviewed journals from its database), along with intentional exclusion of certain faculty (e.g., excluding a very highly-cited legal research & writing faculty member who is not tenured or tenure-track).

These individual-level concerns are real, but certainly not because they would tend to make the USNWR citation metrics less accurate. There are two reasons to be worried—non-random biases and law school administrative reactions—which I’ll get to in a moment.

Suppose USNWR said it was going to do a ranking of all faculty by mean and median height of tenured and tenure-track faculty. “But wait,” you might protest, “my faculty just hired a terrific 5’0” scholar!” or, “we have this terrific 6’4” clinician who isn’t tenure-track!” All true. But, doesn’t every faculty have those problems? If so, then the methodology doesn’t have a particular weakness in measuring all the law schools as a whole against one another. Emphasis on weaknesses related to individuals inside or outside the rankings ought to wash out across schools.

Ah, you say, but I have a different concern—that is, let’s say, our school has a high percentage of female faculty (much higher than the typical law school), and women tend to be shorter than men, so this ranking does skew against us. This is a real bias we should be concerned about.

So let’s focus on this first problem. Suppose your school has a cohort of clinical or legal research & writing faculty on the tenure track, and they have lesser (if any) writing obligations; many schools do not have such faculty on the tenure track. Now we can identify a problem: some schools suffer under the methodology of these rankings.

Another—suppose your school had a disproportionate number of faculty whose work appears in books or peer-reviewed journals that don’t appear in Hein. There might be good reasons for this. Or, there might not be. That’s another problem. But, again, just because one faculty member has a lot of peer-reviewed publications not in Hein doesn’t mean it’s a bad metric across schools. (Believe me, I feel it! My most-cited piece is a peer-reviewed piece, and I have several placements in peer-reviewed journals.)

Importantly, my colleague Professor Rob Anderson notes one virtue of the Sisk rankings that are not currently present in Hein citation counts: “The key here is ensuring that Hein and US News take into account citations TO interdisciplinary work FROM law reviews, not just citations TO law reviews FROM law reviews as it appears they might do. That would be too narrow. Sisk currently captures these interdisciplinary citations FROM law reviews, and it is important for Hein to do the same. The same applies to books.”

We simply don’t know (yet) whether these institutional biases exist or how they’ll play out. But I have a few preliminary graphics on this front.

It’s not clear how Hein will measure things. Sisk-Leiter measures things using a formula of 2*mean citations plus median citations. The USNWR metric may use mean citations plus median citations, plus publications. Who knows at this point. (We’ll know in a few weeks!)
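
As a reminder of what the Sisk-Leiter arithmetic looks like in practice, here is a minimal sketch with invented per-faculty citation counts (none of these numbers are real):

```python
import statistics

# Invented per-faculty citation counts for one hypothetical school.
faculty_citations = [210, 95, 80, 70, 55, 40, 25, 10]

# The Sisk-Leiter formula described above: 2 * mean citations + median citations.
sisk_leiter = 2 * statistics.mean(faculty_citations) + statistics.median(faculty_citations)
print(round(sisk_leiter, 1))
```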

In the future, if (big if!) USNWR chooses to incorporate Hein citations into the rankings, it would likely do so by diminishing the value of the peer review score, which currently sits at a hefty 25% of the ranking and has been extraordinarily sticky. So it may be valuable to consider how citations relate to the peer score and what material differences we might observe.

Understanding that Sisk-Leiter is an approximation for Hein at this point, we can show the relationship between the top 70 or so schools in the Sisk-Leiter score (and a few schools we have estimates for at the bottom of the range), and the relationship of those schools to their peer scores.

This is a remarkably incomplete portrait for a few reasons, not the least of which is that the trendline would change once we add the 130 schools with scores lower than about 210 to the matrix. But we can see, very roughly, that peer score and Sisk-Leiter score correlate, with a few outliers—those outperforming their peer score via Sisk-Leiter above the line, those underperforming below the line.

But this is also an incomplete portrait for another reason. USNWR standardizes each score, which means it places the scores in relationship with one another before totaling them. That’s how it can add a figure like $80,000 of direct expenditures per student to an incoming class median LSAT score of 165. Done this way, we can see just how much impact changes (either natural improvement or attempts to manipulate the rankings) can have. This is emphatically the most important way to think about the change. Law school deans who see that citations are a part of the rankings and reorient themselves accordingly may well be chasing after the wind if costly school-specific changes have, at best, a marginal relationship to improving the school’s overall USNWR score.

UPDATE: A careful and helpful reader pointed out that USNWR standardizes each score, but "rescales" only at the end. So the analysis below is simplified: I take the standardized z-scores and rescale them myself. This still allows us to make relative comparisons, but it isn't the most precise way of thinking about the numerical impact at the end of the day. It makes things more readable, but less precise. Forgive me for my oversimplification and conflation!
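
To make that standardize-and-weight arithmetic concrete, here is a minimal Python sketch. The component values and the 15% peer / 10% citations weights are invented for illustration (they echo the hypothetical model discussed below); they are not USNWR's actual inputs or weights.

```python
import statistics

# Invented component values for a handful of hypothetical schools.
peer_scores = [4.8, 4.4, 3.9, 3.1, 2.5, 1.1]
citations = [1474, 900, 565, 350, 215, 75]

def z_scores(values):
    """Standardize: subtract the mean, divide by the standard deviation."""
    mean, sd = statistics.mean(values), statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def rescale(values):
    """Rescale so the top value becomes 100 and the bottom becomes 0."""
    lo, hi = min(values), max(values)
    return [100 * (v - lo) / (hi - lo) for v in values]

# Weight the standardized components (illustrative 15% peer / 10% citations).
combined = [0.15 * p + 0.10 * c
            for p, c in zip(z_scores(peer_scores), z_scores(citations))]
print([round(x, 1) for x in rescale(combined)])
```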

Let’s take a look at how scaling currently works with the USNWR peer scores.

I took the peer review scores from the rankings released in March 2018 and scaled them on a 0-100 scale—the top score (4.8) became 100, and the bottom score (1.1) became 0.

As you can see, the scaling spreads out the schools a bit at the top, and it starts to compress them fairly significantly as we move down the list. I did a visualization of the distribution not long ago, but here you can see that an improvement in your peer score of 0.1 can nudge you up, but it won’t make up a lot of ground. That said, the schools are pretty well spread apart, and if you ground it out, you could make some headway if you improved your peer score by 0.5 points—a nearly impossible feat. Coupled with the fact that this factor is a whopping 25% of the rankings, it offers opportunity if someone could figure out how to move. (Most schools just don’t move. The ones that do tend to do so because of name changes.)
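
The scaling itself is just min-max arithmetic over the 1.1-to-4.8 range. A quick sketch of what a 0.1 or 0.5 peer-score improvement is worth on that 0-100 scale:

```python
def scale_peer(score, lo=1.1, hi=4.8):
    """Min-max scale a peer score onto 0-100 (lo maps to 0, hi maps to 100)."""
    return 100 * (score - lo) / (hi - lo)

print(scale_peer(4.8))                              # 100.0
print(scale_peer(1.1))                              # 0.0
print(round(scale_peer(3.3) - scale_peer(3.2), 1))  # 2.7: a 0.1 bump buys about 2.7 scaled points
print(round(scale_peer(3.7) - scale_peer(3.2), 1))  # 13.5: a (nearly impossible) 0.5 bump buys about 13.5
```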

Now let’s compare that to a scaling of the Sisk-Leiter scores. I had to estimate the bottom 120 or so scores, distributing them down to a low-end Sisk-Leiter score of 75. There’s a lot of guesswork, so this is extremely rough. (The schools ranked around 70 or so have a Sisk-Leiter score of around 210.)

Yale’s 1474 becomes 100; the 75 I created for the bottom becomes 0. Note what happens here. Yale’s extraordinarily high citation count stretches out the field. That means lots of schools are crammed together near the bottom—consider Michigan (35), Northwestern (34), and Virginia (32). Two-thirds of schools are down in the bottom 10% of the scoring.
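
The same min-max arithmetic, with Yale's 1474 at the top and my estimated floor of 75 at the bottom, shows why the field compresses (the exact figures depend on that estimated floor):

```python
def scale_sisk(score, lo=75, hi=1474):
    """Min-max scale a Sisk-Leiter citation score onto 0-100 (75 maps to 0, 1474 to 100)."""
    return 100 * (score - lo) / (hi - lo)

print(round(scale_sisk(1474)))  # 100 (Yale's outlier score sets the top of the scale)
print(round(scale_sisk(350)))   # about 20
print(round(scale_sisk(215)))   # about 10, give or take, depending on the floor estimate
```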

If you’re looking to gain ground in the USNWR rankings, and assuming the Hein citation rankings look anything like the Sisk-Leiter citation rankings, “gaming” the citation rankings is a terrible way of doing it. A score of about 215 puts you around 11; a score of around 350 puts you at 20. That kind of movement is on the order of a 0.5 peer-score improvement.

But let’s look at that 215 again. That’s about a 70 median citation count. Sisk-Leiter is over three years. So on a faculty of about 30, that’s about 700 faculty-wide citations per year. To get to 350, you’d need about a 90 median citation count, or increase to around 900 faculty-wide citations per year. Schools won’t be making marked improvements with the kinds of “gaming” strategies I outlined in an earlier post. They may do so through structural changes—hiring well-cited laterals and the like. But I am skeptical that any modest changes or gaming would have any impact.
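
The back-of-the-envelope arithmetic behind those figures, assuming (as above) a faculty of about 30 and Sisk-Leiter's three-year citation window:

```python
FACULTY_SIZE = 30   # assumed faculty size from the paragraph above
WINDOW_YEARS = 3    # Sisk-Leiter counts citations over three years

def citations_per_year(median_citations):
    """Very rough: treat the median faculty member's count as typical of the whole faculty."""
    return median_citations * FACULTY_SIZE / WINDOW_YEARS

print(citations_per_year(70))  # 700.0 faculty-wide citations per year (roughly a 215 score)
print(citations_per_year(90))  # 900.0 faculty-wide citations per year (roughly a 350 score)
```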

There will undoubtedly be some modest advantages for schools that dramatically outperform their peer scores, and some modest injury for a few schools that dramatically underperform. But for most schools, the effect will be marginal at best.

That’s not to say that some schools won’t react inappropriately or with the wrong incentives to a new structure in the event that Hein citations are ultimately incorporated in the rankings.

But one more perspective. Let’s plug these Sisk-Leiter models into a USNWR model. Let’s suppose instead of peer scores being 25% of the rankings, peer scores become just 15% of the rankings and “citation rankings” become 10% of the rankings.

UPDATE: I have modified some of the figures in light of choosing to use the z-scores instead of adding the scaled components to each other.

When we do this, just about every school loses points relative to the peer score model—recall in the chart above, a lot of schools are bunched up in the 60-100 band of peer scores, but almost none are in the 60-100 band for Sisk-Leiter. Yale pushes all the schools downward in the citation rankings.

So, in a model of 15% peer score/10% citation rankings among the top 70 or so Sisk-Leiter schools, the typical school drops about 13 scaled points. That’s not terribly important for most of them—recall, they’re being compared to one another, so if most schools drop by a similar amount, most remain unchanged relative to one another. And again, that drop sits in components worth only 25% of the rankings, so a 17-point drop works out to roughly a 4-point change in the overall score (0.25 × 17 ≈ 4).

I then looked at the model to see which schools dropped 24 or more scaled points in this band (i.e., a material drop in score, given that this only accounts for about 25% of the ranking): Boston College, Georgetown, Iowa, Michigan, Northwestern, Texas, Virginia, and Wisconsin. (More schools would likely fit this model once we had all 200 schools’ citation rankings. The University of Washington, for instance, would also probably be in this set.) But recall that some of these schools—like Michigan and Northwestern—are already ranked much higher than many other schools, so even a drop like this would have little impact on the ordinal rankings of law schools.

I then looked at the model to see which schools dropped 10 points or fewer (or gained!) (i.e., a material improvement in score): Chicago, Drexel, Florida International, George Mason, Harvard, Hofstra, Irvine, San Francisco, St. Thomas (MN), Toledo, and Yale. Recall again, Yale cannot improve beyond #1, and Harvard and Chicago are, again, so high in the rankings that marginal relative improvements in this area are likely not going to affect the ordinal rankings.

And all this means that, for the vast majority of schools, we’ll see little change—perhaps some randomness in rounding or year-to-year variations, but I don’t project much change at all for most schools.

Someone with more sophistication than I could then try to game out how these fit into the overall rankings. But that’s enough gaming for now. We’ll wait to see how the USNWR Hein citation figures come out this year, then we might play with the figures to see how they might affect the rankings.

(Note: to emphasize once again, I just use Sisk-Leiter here. Hein will include, among other things, different citations; it may weight citations differently than Sisk-Leiter; it uses a different window; it may use different faculty; and the USNWR citation rankings may well include publications in addition to citations.)

Will Goodhart's Law come to USNWR's Hein-based citation metrics?

Economist Charles Goodhart has an old saying attributed to him: “When a measure becomes a target, it ceases to be a good measure.” In the office setting, or in government, or anywhere, really, if you identify a measure as a target, people begin to behave in a way to maximize the value of that measure, and the measure loses its value—because it’s no longer accurately measuring what we hoped it would measure.

The recent announcement from U.S. News and World Report that it would begin to incorporate Hein Online citation data into a rankings formula offers much to discuss. Currently, the “quality” of a law school faculty is roughly measured by a survey of 800 law school faculty, asking them to rank schools on a 1-to-5 scale (which results in likely artificially low and highly compressed rankings).

This isn’t a remarkable proposition. In the first Maclean’s ranking of Canadian law schools in 2007, Professor Brian Leiter helped pioneer a rankings system that included faculty citations. Every few years, a ranking of law school faculty by Professor Greg Sisk, building off Professor Leiter’s method, is released.

The significance, however, is that it is USNWR doing it. The USNWR rankings seem to have outsized influence in affecting law school behavior.

That said, USNWR has announced that, for the moment (i.e., the rankings released in 2019), the ranking will be separate and apart from the existing USNWR ranking. Much like USNWR separately ranks schools by racial diversity, debt loads, or part-time programs, this would be an additional factor. That means it may have much less impact on how schools respond. I imagine prospective applicants will still likely rely primarily on the overall score.

In the future, it may (this is an open question, so let’s not freak out too quickly yet!) be used in the overall rankings. On the whole, it may well be a good change (but I imagine many will disagree even on this hesitant claim!). Rather than the subjective, perhaps too sticky, assessments of faculty voting, this provides an objective (yes, even if imperfect!) metric of faculty productivity and influence. In the event that it is used in the overall score in the future, as one more small component of an overall ranking, it strikes me as appropriate.

There are downsides to any citation metric, of course, so we can begin with that. What to measure, and how, will never be agreed-upon terms. This metric is not exempt from those downsides. To start, USNWR announced that it would use the “previous five years” of citations on Hein: “This includes such measures as mean citations per faculty member, median citations per faculty member and total number of publications.”

To name a few weaknesses, then: Schools with disproportionately more junior faculty will be penalized; so, too, will schools with significant interdisciplinary publications or books that may not appear in Hein. (It’s not clear how co-authored pieces, or faculty with appointments at more than one institution, would be treated.)

But I’m more concerned with this Goodhart principle: once we start measuring it, what impact might it have on faculty behavior? A few come to mind.

There will be a temptation to recast senior would-be emeritus faculty with sufficient scholarly output as still-full-time faculty members, among other ways of trying to recategorize faculty. Perhaps these are marginal changes that all schools will simultaneously engage in, and the results will wash out.

There is a risk of creating inflated or bogus citations. This is a much more realistic risk. Self-citations are ways that scholars might try to overstate their own impact. Men tend to self-cite at disproportionately higher rates than women. Journals that have excessive self-citations are sometimes punished in impact factor rankings. Pressure to place pieces in home institution journals may increase.

Hein currently measures self-citations, which would be a nod in this direction. Some self-citations are assuredly acceptable. But there may be questions about some that are too high. The same might be true if colleagues started to cite one another at unusually high rates, or if they published puff pieces in marginal journals available on Hein with significant citations to themselves and their colleagues.

My hope (ed.: okay, the rosy part begins?) is that law professors will act without regard to the new measure, and that law school administrators seeking to improve their USNWR ranking do not pressure law faculty to change their behavior. And perhaps Hein and USNWR will ensure that its methodology prevents such gaming.

The same holds true for books or interdisciplinary journals that don’t appear on Hein. My hope is that schools continue to value them apart from what value they receive as a component of the rankings. (And it’s worth noting that this scholarship will continue to receive recognition to the extent that the “peer review” voting reflects the output of faculty, output of all types.) (Another aside—perhaps this offers Hein some leverage in seeking to license some interdisciplinary journal content in the years ahead….)

This hope is, perhaps, overly optimistic. But I think a school that starts to exert pressure on faculty to change the kind of scholarship they are doing would receive significant backlash in the legal community. In contrast, the pressure will probably be greater on those faculty who are currently not producing scholarship or not receiving citations to their work—a different kind of pressure.

It will be much easier for schools to “game” the median citations—finding the single faculty member in the middle, and trying to climb the ladder that way. Median is probably a better metric, in my view (because the mean can be disproportionately exaggerated by an outlying faculty member), but it is also more likely to be susceptible to Goodhart’s Law. Mean citations would be a tougher thing to move as dramatically or with such precision.
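
A toy illustration of that asymmetry, with invented citation counts: boosting only the middle faculty member moves the median sharply while barely nudging the mean.

```python
import statistics

# Invented citation counts; the school boosts only the middle faculty member.
before = [200, 90, 60, 40, 30, 20, 10]
after = [200, 90, 60, 80, 30, 20, 10]  # the median faculty member picks up 40 extra citations

print(statistics.median(before), statistics.median(after))            # 40 -> 60: the median jumps
print(round(statistics.mean(before), 1), round(statistics.mean(after), 1))  # 64.3 -> 70.0: the mean barely moves
```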

The news from USNWR also indicates it will measure the total number of publications. That’s an output metric in addition to the “influence” metrics of mean and median citations. It could benefit the more junior cohorts of faculties, which tend to produce at a higher rate as they strive for tenure. (One such illustration is here.)

Finally, this could have an impact on how resources are allocated at institutions. In the past, scholarly output was not a part of the USNWR rankings formula. If it becomes part of the formula in the future, it will become a more valuable asset for institutions to fund and support, and a different way of valuing faculty members.

There are lots of unknowns about this process, and I look forward to seeing the many reactions (sure to come!), in addition to the final formula used and what those rankings look like. And these are all tentative claims that may well overstate or understate certain things—I only observe a few things in a short period of time for now! Time will tell.

"Why not continue the political struggle in partisan-gerrymandering cases?"

I have this piece up at SCOTUSblog entitled, “Symposium: Why not continue the political struggle in partisan-gerrymandering cases?” It begins:

“In a democratic society like ours, relief must come through an aroused popular conscience that sears the conscience of the people’s representatives.” So wrote Justice Felix Frankfurter in his dissenting opinion in Baker v. Carr in 1962.

It was, of course, a dissent. A majority of the Supreme Court in short order reorganized state legislatures according to its own understanding of fair representation — that population should be roughly equal in each legislative district. And the majority’s basis for doing so, Frankfurter’s dissent chided, “ultimately rests on sustained public confidence in [the Court’s] moral sanction.”

The political process is a messy thing. It is laborious to educate the public on a matter and convince them of that matter’s significance. It is time-consuming to wait through election cycles to enact political changes. Impatient litigants demand that the federal courts intervene when the political process moves too slowly.

Law school ruin porn hits USA Today

I actually laughed out loud when I started reading this “yearlong investigation” by four USA Today journalists on the state of legal education. I call the genre “law school ruin porn.”

“Ruin porn” has long been a genre of photojournalism displaying the decay of urban centers or Olympic sites. And I think the genre works for “law school ruins”: exploiting details about the most marginal law schools and the most at-risk students, then treating them as typical of the profession.

Here’s how the piece opens:

Sam Goldstein graduated from law school in 2013, eager to embark on a legal career.

Five years later, he is still waiting. After eight attempts, Goldstein has not passed the bar exam, a requirement to become a practicing attorney in most states.

"I did not feel I was really prepared at all" to pass the bar, Goldstein  said of his three years in law school. "Even the best of test preps can't really help you unless you've had that solid foundation in law school."

In the meantime, many take lower-paying jobs, as Goldstein did, working as a law clerk. What he earned didn't put a dent in his $285,000 in student-loan debt, most of which was accrued in law school.

The piece is reminiscent of a genre of journalism that peaked in 2011 in a series of pieces by David Segal in the New York Times. Here’s how one of them opened:

If there is ever a class in how to remain calm while trapped beneath $250,000 in loans, Michael Wallerstein ought to teach it.

Here he is, sitting one afternoon at a restaurant on the Upper East Side of Manhattan, a tall, sandy-haired, 27-year-old radiating a kind of surfer-dude serenity. His secret, if that’s the right word, is to pretty much ignore all the calls and letters that he receives every day from the dozen or so creditors now hounding him for cash.

“And I don’t open the e-mail alerts with my credit score,” he adds. “I can’t look at my credit score any more.”

Mr. Wallerstein, who can’t afford to pay down interest and thus watches the outstanding loan balance grow, is in roughly the same financial hell as people who bought more home than they could afford during the real estate boom. But creditors can’t foreclose on him because he didn’t spend the money on a house.

He spent it on a law degree. And from every angle, this now looks like a catastrophic investment.

Well, every angle except one: the view from law schools.

The fundamental problem with a piece like this one in USA Today is how it treats the outlier as the norm. The vast majority of law students do pass the bar exam on the first attempt. The vast majority of law schools are at no risk of failing to meet the ABA’s standards. But the piece is framed in quite a different fashion.

A student like the one USA Today profiled is nearly impossible to find. For instance, I blogged earlier about how 2293 first-time test-takers did on the Texas bar exam. Only 10 failed the bar exam even four times. Granted, there were also about another 150 who failed one, two, or three attempts and stopped attempting (at least, stopped attempting in Texas). But it’s nearly impossible to find graduates who have had such poor performance, bad luck, or some combination of both for such an extended period of time.

USA Today also profiled a graduate of Arizona Summit Law School, the outlier for-profit law school—I’ve blogged before about how, prior to 1995, the ABA would never accredit for-profit law schools, until the Department of Justice compelled it to do so. (More on Arizona Summit in a bit.)

The ostensible focus of the piece is the ABA’s renewed proposal to require law schools to demonstrate an “ultimate” bar passage rate of 75% within two years of graduation. The result appears dire: “At 18 U.S. law schools, more than a quarter of students did not pass the bar exam within two years,” according to Class of 2015 data.

Of course, George W. Bush would have lost the 2000 presidential election if the National Popular Vote plan were in place. Or, less snarkily: if the rules change, we should expect schools—and perhaps state bars—to change how they behave. If 75% were the cutoff, we would expect not just changes in admissions standards, but changes in bar exam cut scores, changes in where students are encouraged to take the bar exam, increased academic dismissal rates, and so on—in short, the 18 from the Class of 2015 doesn’t tell us much.

That said, there are two other reasons the 18 figure doesn’t tell us much. First, and this makes me more “doom and gloom,” it’s too conservative a figure to show the schools that may face a problem in the near future. Any school near an 80% ultimate pass rate, I think, would feel the heat of this proposal—a bad year, a few frustrated students who stop repeating, a weak incoming class, and so on could move a school’s figures a few percentage points and put them in danger. Another 12-15 law schools are within a zone of danger of the new ABA proposal.

Second, the 18 is not nearly as dire as the USA Today piece makes it seem. Two of them are schools in Puerto Rico, which are so different in kind from the rest of the ABA-accredited law schools in the United States that they are essentially two entirely different markets.

At the very end, the piece finally concedes something about Arizona Summit: “Arizona Summit Law School in Phoenix, Whittier Law School in Southern California and Valparaiso Law School in northern Indiana are not accepting new students and will shut once students finish their degrees.” Even without the ABA proposal, 3 of the 18 schools are shutting down—including Arizona Summit, the foil of the opening of the piece. So now the student is not simply an outlier, an 8-time bar test taker from a for-profit school, but one from a for-profit school that is no longer in operation. An outlier of an outlier of an outlier—given treatment as something typical. Talk about burying the lede.

And while the data comes from the ABA, I have to wonder whether, because this is the first such disclosure from law schools, some of it is not entirely accurate. (Again, one would think a yearlong investigation would clear up these points.) Take Syracuse, listed with an ultimate pass rate of 71%. Its first-time July 2015 bar pass rate was 79%. (Its July 2016 rate rose to 89%.) Its combined February and July 2015 pass rates were 86%, along with 75% in New Jersey. (Its California rate for those two tests was 1-for-13.) Now, perhaps it has an unusually high number of individuals failing out of state, or who didn’t take the July 2015 bar the first time and ultimately failed—I have no idea. But it’s the kind of outlier statistic that, to me, merits an inquiry rather than simply a report of figures. (UPDATE: Syracuse has indicated that the figures were, in fact, inaccurate, and that its ultimate bar passage rate was 82.6%.)

The piece also unhelpfully quotes, without critique, some conclusions from “Law School Transparency.” (You may recall that several years ago LST tried to shake down law schools by charging them at least $2750 a year to “certify” that those schools met LST’s disclosure standards.) For instance, “The number of law schools admitting at least 25% of students considered ‘at risk’ of failing the bar jumped from 30 schools to 74 schools from 2010 to 2014, according to a report in 2015 by Law School Transparency.” Of course, if one cares about ultimate pass rates, which this article purports to care about, then how is it that just 18 schools missed the “ultimate” pass rate compared to LST’s projected 74 (for 2014, and things weren’t exactly better by 2015)? In part because LST’s “at risk” definition is overly broad: it doesn’t include academic dismissals (despite mentioning them in the report); it doesn’t account for variances in state bars (despite mentioning them in the report, they’re not included in identifying “at risk”); it’s not clear whether LST is primarily concerned with first-time or ultimate passage (the report jumps around); LST adds a level of risk (which USA Today mistakenly reports) of not graduating in addition to the risk of not passing the bar (which, I think, is an entirely valid thing to include); and so on.

A lengthy investigative piece should, in theory, provide greater opportunity for subtlety and fine-tuning points, rather than list a bunch of at-risk schools and serially identify problems with as many of them as possible. That isn’t to say that there aren’t some existential problems at a handful of law schools in the United States, or that the ABA’s proposal isn’t worthy of some serious consideration. It’s simply that this form of journalism is a relic of 2011, and I hope we see the return of more nuanced and complicated analyses to come.

Election Law news of note, week ending January 18, 2019

Here I compile news items I find of note (even if others may not find them of note!) regarding Election Law topics each week.

Iowa’s governor has proposed ending permanent felon disenfranchisement in the state. Iowa is one of a handful of states that still imposes it under our patchwork quilt of state-based voter qualification rules. This comes on the heels of the successful repeal in Florida just last year. The opportunity to give convicted felons a second chance has seen growing bipartisan support on a variety of fronts, including the recent passage of the FIRST STEP Act in Congress. The details of such a proposal remain to be seen—and whether Iowans support it is another matter.

Speaking of Iowa, “radical changes” are promised for the 2020 Iowa Democratic caucuses. The likely solutions, a “proxy” caucus and a “tele-caucus,” are sure to increase participation in the event. I’ve wondered how historic structures like “realignment,” a tool that benefited Barack Obama in 2008, might look in a new format, and whether results differ. Assuredly, a change in process will lead to increased uncertainty ahead of the caucuses—perhaps simply building excitement!

A federal court of appeals declined to extend the long-standing consent decree in litigation known as DNC v. RNC. The decree began in litigation that Democrats filed against Republicans in 1981, and it has been extended for years since to prevent Republicans from engaging in certain election-related tactics. But it’s worth remembering that back in 2016, a similar effort was raised, and the Supreme Court, without noted dissent, declined to consider the issue. Perhaps that was in part because the litigation arose in literally the days before the election and there was little opportunity to develop the record. But Justice Ruth Bader Ginsburg wrote separately concerning her reason: Ohio law already prohibits voter intimidation. Perhaps, then, extending a 1981-era consent decree is unnecessary, as long as existing state laws are, well, adequate to the task. We shall see if future challenges arise concerning this consent decree.

By the way, it’s going to be a busy week for faithless elector litigation! Oral argument is scheduled for cases in Colorado and Washington this upcoming week.

Election Law news of note, week ending January 11, 2019

In an effort to use Twitter less, I’ll try to start compiling news items I find of note (even if others may not find them of note!) regarding Election Law topics each week.

A new bill introduced in New York would prohibit state parties from using “Independent” or “Independence” in their names. The Independence Party sometimes cross-endorses candidates or runs its own candidates for office. But some lawmakers believe this term deceives voters. This is not a unique problem. In California, the obscure American Independent Party has garnered a significant number of registered voters, likely because of its name. I’ve written about the concept of “Ballot Speech,” or the right of candidates and political parties to express themselves to voters by means of the ballot. I think political parties—especially an established 25-year-old group like the Independence Party—should receive more protection than they currently receive, for reasons I lay out in the article. Regardless, the bill struck me as one of note.

The West Virginia House of Delegates has asked the Supreme Court to consider a Guarantee Clause claim arising out of the state’s impeachment proceedings. The House impeached all members of the West Virginia Supreme Court and sent the claims over to the Senate for trial. But “the acting court halted the impeachment process in West Virginia by concluding that legislators had overstepped their constitutional authority. Acting justices concluded lawmakers had based impeachment on areas the state Constitution set aside as the responsibility of the judicial branch.” It’s an interesting internal power struggle, and hardly the first time the West Virginia Supreme Court has been the topic of Supreme Court election-related litigation. I think the Court is unlikely to grant the petition, and even less likely to find a Guarantee Clause violation, but the brief was of interest to me.

Democrats took over majority control of the House of Representatives and introduced H.R. 1, a symbolic and sweeping 571-page bill regarding elections in the United States. I won’t spend much time on each piece of the bill because it has effectively no chance to become law. But two provisions struck me as notable. First, the bill basically leaves untouched Shelby County and the related portions of the Voting Rights Act. It seems strange to me that, given that Shelby County has been one of the Supreme Court decisions most criticized by left-leaning politicians in recent years, the only thing the act does is provide, “Congress is committed to reversing the devastating impact of this decision.” Perhaps an updated Voting Rights Act merits a separate bill. But I found it notable that in 571 pages, Shelby County was almost nowhere to be found. It seemed that not all Democratic constituencies had a hand in crafting the bill.

Second, the bill includes a “Democracy Restoration” provision (Section 1402) that provides that the right to vote in federal elections shall not be abridged or denied on the basis of a criminal conviction, unless those individuals are “serving a felony sentence in a correctional institution or facility at the time of the election.” Setting aside the policy merits of this provision, I’ve long wondered what constitutional hook would authorize Congress to do so (although there are some plausible if unlikely bases). There’s no express constitutional hook in this bill, but a subsequent provision (Section 1407) was of interest: it prohibits a State from using federal funds “to construct or otherwise improve a prison, jail, or other place of incarceration” unless that State “has in effect a program under which each individual incarcerated in that person’s jurisdiction . . . is notified, upon release from such incarceration, of that individual’s rights under section 1402.” It seems to me that the Spending Clause is a rather difficult hook for expanding the franchise. That said, technically, Section 1407 only requires states to “notif[y]” individuals, not actually enfranchise them, so perhaps the Spending Clause isn’t the hook—instead, perhaps it’s one of the bases I’ve mused about earlier.