Gaming out Hein citation metrics in a USNWR rankings system

Much has been said about the impending USNWR proposal to have a Hein-based citation ranking of law school faculties. I had some of my own thoughts here. But I wanted to focus on one of the most popular critiques, then move to gaming out how citations might look in a rankings system.

Many have noted how Hein might undercount individual faculty citations, whether accidentally (e.g., misspellings of names) or by design (e.g., the exclusion of many peer-reviewed journals from its database), along with the intentional exclusion of certain faculty (e.g., a very highly cited legal research & writing faculty member who is not tenured or tenure-track).

These individual concerns are worth noting, but not necessarily because they would make the USNWR citation metrics less accurate. There are two reasons to be worried—non-random biases and law school administrative reactions—which I’ll get to in a moment.

Suppose USNWR said it was going to do a ranking of all faculty by mean and median height of tenured and tenure-track faculty. “But wait,” you might protest, “my faculty just hired a terrific 5’0” scholar!” or, “we have this terrific 6’4” clinician who isn’t tenure-track!” All true. But, doesn’t every faculty have those problems? If so, then the methodology doesn’t have a particular weakness in measuring all the law schools as a whole against one another. Emphasis on weaknesses related to individuals inside or outside the rankings ought to wash out across schools.

Ah, you say, but I have a different concern—that is, let’s say, our school has a high percentage of female faculty (much higher than the typical law school), and women tend to be shorter than men, so this ranking does skew against us. This is a real bias we should be concerned about.

So let’s focus on this first problem. Suppose your school has a cohort of clinical or legal research & writing faculty on the tenure track, and they have lesser (if any) writing obligations; many schools do not have such faculty on the tenure track. Now we can identify a problem: some schools suffer under the methodology of these rankings in ways that others do not.

Another—suppose your school had disproportionate numbers of faculty whose work appears in books or peer-reviewed journals that don’t appear in Hein. There might be good reasons for this. Or, there might not be. That’s another problem. But, again, just because one faculty member has a lot of peer-reviewed publications not in Hein doesn’t mean it’s a bad metric across schools. (Believe me, I feel it! My most-cited piece is a peer-reviewed piece, and I have several placements in peer-reviewed journals.)

Importantly, my colleague Professor Rob Anderson notes one virtue of the Sisk rankings that are not currently present in Hein citation counts: “The key here is ensuring that Hein and US News take into account citations TO interdisciplinary work FROM law reviews, not just citations TO law reviews FROM law reviews as it appears they might do. That would be too narrow. Sisk currently captures these interdisciplinary citations FROM law reviews, and it is important for Hein to do the same. The same applies to books.”

We simply don’t know (yet) whether these institutional biases exist or how they’ll play out. But I have a few preliminary graphics on this front.

It’s not clear how Hein will measure citations. Sisk-Leiter uses a formula of twice the mean citation count plus the median citation count. The USNWR metric may use mean citations plus median citations, plus publications. Who knows at this point. (We’ll know in a few weeks!)
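For concreteness, here is a minimal sketch of that Sisk-Leiter-style formula in Python; the citation counts are hypothetical, and the real study defines its own faculty list and window.

```python
# A minimal sketch of the Sisk-Leiter-style score described above (2 * mean + median),
# using hypothetical per-faculty citation counts over the study window.
from statistics import mean, median

citations = [210, 120, 95, 60, 45, 30, 15]  # hypothetical counts for one faculty

score = 2 * mean(citations) + median(citations)
print(round(score))  # ~224 for these made-up numbers
```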

In the future, if (big if!) USNWR chooses to incorporate Hein citations into the rankings, it would likely do so by diminishing the value of the peer review score, which currently sits at a hefty 25% of the ranking and has been extraordinarily sticky. So it may be valuable to consider how citations relate to the peer score and what material differences we might observe.

Understanding that Sisk-Leiter is an approximation for Hein at this point, we can look at the top 70 or so schools by Sisk-Leiter score (plus a few schools we have estimates for at the bottom of the range) and the relationship of those scores to the schools’ peer scores.

This is a remarkably incomplete portrait for a few reasons, not the least of which is that the trendline would change once we add the 130 schools with scores lower than about 210. But we can see, very roughly, that peer score and Sisk-Leiter score correlate, with a few outliers—schools outperforming their peer score on Sisk-Leiter sit above the line, and schools underperforming sit below it.
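If you want to replicate that picture, here is a minimal sketch of the trendline-and-residuals idea, with hypothetical numbers standing in for the real peer scores and Sisk-Leiter scores.

```python
# A minimal sketch of the comparison described above: fit a simple trendline of
# Sisk-Leiter score on peer score, then treat the residual as "outperforming"
# (positive) or "underperforming" (negative). All numbers here are hypothetical.
import numpy as np

peer  = np.array([4.8, 4.4, 3.9, 3.3, 2.6, 2.1])    # hypothetical peer scores
cites = np.array([1474, 650, 540, 350, 260, 215])   # hypothetical Sisk-Leiter scores

slope, intercept = np.polyfit(peer, cites, 1)        # least-squares trendline
residuals = cites - (slope * peer + intercept)       # above the line > 0, below < 0
print(np.round(residuals))
```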

But this is also an incomplete portrait for another reason. USNWR standardizes each score, which means it places the scores in relationship with one another before totaling them. That’s how it can add a figure like $80,000 of direct expenditures per student to an incoming class median LSAT score of 165. Done this way, we can see just how much impact changes (either natural improvement or attempts to manipulate the rankings) can have. This is emphatically the most important way to think about the change. Law school deans who see that citations are a part of the rankings and reorient themselves accordingly may well be chasing after the wind if costly school-specific changes have, at best, a marginal relationship to improving one’s overall USNWR score.

UPDATE: A careful and helpful reader pointed out that USNWR standardizes each score but "rescales" only at the end. So the analysis below is simplified: I take the standardized z-scores and rescale them myself. This still allows us to make relative comparisons, but it isn't the most precise way of thinking about the numerical impact at the end of the day. It makes things more readable, but less precise. Forgive me the oversimplification and conflation!
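To make the standardize-then-combine step concrete, here is a minimal sketch with hypothetical inputs and weights; USNWR's actual inputs and weights are more involved than this.

```python
# A minimal sketch of standardizing heterogeneous metrics (z-scores) so they can
# be combined, as described above. Inputs and the 50/50 weights are hypothetical.
from statistics import mean, pstdev

def z_scores(values):
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

expenditures = [80_000, 45_000, 30_000, 25_000]   # direct expenditures per student
median_lsat  = [173, 165, 160, 155]               # incoming class median LSAT

# On a common (standardized) scale, a weighted sum of dollars and LSAT points is meaningful.
combined = [0.5 * e + 0.5 * l for e, l in zip(z_scores(expenditures), z_scores(median_lsat))]
print([round(c, 2) for c in combined])
```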

Let’s take a look at how scaling currently works with the USNWR peer scores.

I took the peer review scores from the rankings released in March 2018 and scaled them on a 0-100 scale—the top score (4.8) became 100, and the bottom score (1.1) became 0.

As you can see, the scaling spreads out the schools a bit at the top, and it starts to compress them fairly significantly as we move down the list. I did a visualization of the distribution not long ago, but here you can see that an improvement in your peer score of 0.1 can nudge you up, but it won’t make up a lot of ground. That said, the schools are pretty well spread apart, and if you ground it out and improved your peer score by 0.5 points—a nearly impossible feat—you could make some real headway. Coupled with the fact that this factor is a whopping 25% of the rankings, it offers opportunity if someone could figure out how to move. (Most schools just don’t move. The ones that do tend to do so because of name changes.)
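Here is the arithmetic behind those claims as a minimal sketch, using the 1.1-to-4.8 range from the March 2018 peer scores.

```python
# A quick check of the peer-score rescaling described above: 4.8 maps to 100 and
# 1.1 maps to 0, so each 0.1 of peer score is worth a fixed number of scaled points.
def rescale_peer(score, lo=1.1, hi=4.8):
    return (score - lo) / (hi - lo) * 100

print(round(rescale_peer(3.0), 1))                      # ~51.4: a middling peer score
print(round(rescale_peer(3.1) - rescale_peer(3.0), 1))  # ~2.7: what a 0.1 improvement buys
print(round(rescale_peer(3.5) - rescale_peer(3.0), 1))  # ~13.5: what a 0.5 improvement buys
```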

Now let’s compare that to a scaling of the Sisk-Leiter scores. I had to estimate the bottom 120 or so scores, distributing them down to a low-end Sisk-Leiter score of 75. There’s a lot of guesswork, so this is extremely rough. (The schools ranked around 70th have a Sisk-Leiter score of around 210.)

Yale’s 1474 becomes 100; the 75 I created for the bottom becomes 0. Note what happens here. Yale’s extraordinarily high citation count stretches out the field. That means lots of schools are crammed together near the bottom—consider Michigan (35), Northwestern (34), and Virginia (32). Two-thirds of schools are down in the bottom 10% of the scoring.

If you’re looking to gain ground in the USNWR rankings, and assuming the Hein citation rankings look anything like the Sisk-Leiter citation rankings, “gaming” the citation rankings is a terrible way of doing it. A Sisk-Leiter score of about 215 puts you at around 11 on the rescaled metric; a score of around 350 puts you at 20. That jump is comparable to the dramatic movement a 0.5 peer score improvement would represent.
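And here is a minimal sketch of that citation rescaling, using the endpoints from above (Yale's 1474 and my assumed bottom score of 75).

```python
# Reproducing the Sisk-Leiter rescaling described above: Yale's 1474 maps to 100
# and the assumed bottom score of 75 maps to 0. Yale's outlier score compresses
# most schools toward the bottom of the 0-100 range.
def rescale_citations(score, lo=75, hi=1474):
    return (score - lo) / (hi - lo) * 100

for raw in (1474, 350, 215):
    print(raw, "->", round(rescale_citations(raw), 1))
# 1474 -> 100.0, 350 -> ~19.7, 215 -> ~10.0 (close to the ~20 and ~11 in the text,
# with the difference owing to the guesswork at the bottom of the distribution)
```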

But let’s look at that 215 again. That’s about a 70 median citation count. Sisk-Leiter is over three years. So on a faculty of about 30, that’s about 700 faculty-wide citations per year. To get to 350, you’d need about a 90 median citation count, or increase to around 900 faculty-wide citations per year. Schools won’t be making marked improvements with the kinds of “gaming” strategies I outlined in an earlier post. They may do so through structural changes—hiring well-cited laterals and the like. But I am skeptical that any modest changes or gaming would have any impact.
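The back-of-the-envelope arithmetic looks something like this, taking the three-year window and the 30-person faculty as given above.

```python
# Rough citation arithmetic from above: what a given median citation count implies
# per year, treating the median as a typical per-faculty count, with a three-year
# window and a faculty of about 30 (the post's assumptions).
faculty_size = 30
window_years = 3

for median_cites in (70, 90):
    per_year = median_cites / window_years * faculty_size
    print(median_cites, "median ->", round(per_year), "faculty-wide citations per year")
# 70 median -> about 700 per year (roughly the ~215 Sisk-Leiter tier)
# 90 median -> about 900 per year (roughly the ~350 tier)
```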

There will undoubtedly be some modest advantages for schools that dramatically outperform their peer scores, and some modest injury for a few schools that dramatically underperform. But for most schools, the effect will be marginal at best.

That’s not to say that some schools won’t react inappropriately or with the wrong incentives to a new structure in the event that Hein citations are ultimately incorporated into the rankings.

But one more perspective. Let’s plug these Sisk-Leiter models into a USNWR model. Let’s suppose instead of peer scores being 25% of the rankings, peer scores become just 15% of the rankings and “citation rankings” become 10% of the rankings.

UPDATE: I have modified some of the figures in light of choosing to use the z-scores instead of adding the scaled components to each other.
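Here is a minimal sketch of that reweighting exercise with hypothetical numbers; because min-max rescaling of z-scores gives the same result as min-max rescaling of the raw values, the sketch simply rescales raw scores to 0-100.

```python
# A minimal sketch of the reweighting described above: peer score falls from 25%
# of the model to 15%, and a citation factor takes the remaining 10%.
# All inputs are hypothetical stand-ins for peer scores and Sisk-Leiter scores.
def rescale(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * 100 for v in values]

peer      = [4.8, 4.4, 3.3, 2.6, 1.1]
citations = [1474, 700, 300, 180, 75]

old_component = [0.25 * p for p in rescale(peer)]
new_component = [0.15 * p + 0.10 * c for p, c in zip(rescale(peer), rescale(citations))]

for old, new in zip(old_component, new_component):
    print(round(old, 1), "->", round(new, 1))
# Most schools' contribution drops, because almost no one scores high on the
# citation scale once Yale stretches it out.
```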

When we do this, just about every school loses points relative to the peer score model—recall in the chart above, a lot of schools are bunched up in the 60-100 band of peer scores, but almost none are in the 60-100 band for Sisk-Leiter. Yale pushes all the schools downward in the citation rankings.

So, in a model of 15% peer score/10% citation rankings among the top 70 or so Sisk-Leiter schools, the typical school drops about 13 scaled points. That’s not important, however, for most of them—recall, they’re being compared to one another, so if most schools drop by about that much, then most should remain essentially unchanged. And again, that drop applies only to the 25% of the rankings at issue here—it works out to roughly a 3-point change in the overall score.

I then looked at the model to see which schools dropped 24 or more scaled points in this band (i.e., a material drop in score, given that this accounts for only about 25% of the ranking): Boston College, Georgetown, Iowa, Michigan, Northwestern, Texas, Virginia, and Wisconsin. (More schools would likely fit this model once we had all 200 schools’ citation rankings. The University of Washington, for instance, would also probably be in this set.) But recall that some of these schools—like Michigan and Northwestern—are already much higher than many other schools, so even a drop like this would have little impact on the ordinal rankings of law schools.

I then looked at the model to see which schools dropped 10 points or fewer (or gained!) (i.e., a material relative improvement in score): Chicago, Drexel, Florida International, George Mason, Harvard, Hofstra, Irvine, San Francisco, St. Thomas (MN), Toledo, and Yale. Recall, Yale cannot improve beyond #1, and Harvard and Chicago are already so high in the rankings that marginal relative improvements in this area are unlikely to affect the ordinal rankings.

All of this means that for the vast majority of schools, we’ll see little change—perhaps some randomness in rounding or year-to-year variation, but I don’t project much change at all for most schools.

Someone with more sophistication than I could then try to game out how these fit into the overall rankings. But that’s enough gaming for now. We’ll wait to see how the USNWR Hein citation figures come out this year, then we might play with the figures to see how they might affect the rankings.

(Note: to emphasize once again, I just use Sisk-Leiter here. Hein will include, among other things, different citations; it may weight them differently than Sisk-Leiter; it uses a different window; it may use different faculty; and the USNWR citation rankings may well include publications in addition to citations.)

MBE scores drop to 34-year low as bar pass rates decline again

On the heels of some good news in recent administrations of the July bar exam comes tough news from the National Conference of Bar Examiners: the mean Multistate Bar Exam (MBE) score has dropped to 139.5, a 34-year low and its lowest point since 1984.

For perspective, California's "cut score" is 144, Virginia's is 140, Texas's is 135, and New York's is 133. Among recent years, a mean score of 139.5 is most comparable to 2015's 139.9. One would have to go back to the 1980s to see comparable scores: 1982 (139.7), 1984 (139.2), and 1988 (139.8).

I’d hoped that the qualifications of students had rebounded a bit as schools improved their incoming classes a few years ago; that students were putting more effort into the bar than in previous years; or that other factors were at work. That appears not to be the case this year.

That said, MBE scores may be slightly less predictive of what will happen with actual bar pass rates. The NCBE has pointed out that the rise of the Uniform Bar Exam has led to a number of test-takers transferring scores to new jurisdictions rather than taking a second jurisdiction’s bar—and, presumably, those who pass in one jurisdiction are much more likely to pass in another (accepting that cut scores vary in some jurisdictions). UBE score-transfer figures point to a few thousand such transfers last year, at least some of whom might otherwise have sat for another bar exam. But put against more than 40,000 MBE test-takers, the effect, while real, may be small.

Instead, we’re left to watch as results come in state by state. Tracking first-time pass rates (from jurisdictions that have shared them so far—ideally, ABA graduates would be a better measure, but this works reasonably well for now), the changes have mostly been declines: New Mexico (-14 points), Indiana (-3), North Carolina (+1), Oklahoma (-8), Missouri (-7), Iowa (-3), Washington (-3), and Florida (-4). But in many of these jurisdictions, pass rates were worse in, say, 2015 or 2016.

We’ll know more in the months to come, but it looks like another year of decline will cause some continued anguish in legal education. The increased quality of law school applicants this year will help the July 2021 bar exam look much better.

Note: I chose a non-zero Y-axis to show relative performance.

Federal Judicial Clerkship Report of Recent Law School Graduates, 2018 Edition

I've regularly posted judicial clerkship statistics on this blog. This year, I offer something slightly different: "Federal Judicial Clerkship Report of Recent Law School Graduates, 2018 Edition," a report I've posted on SSRN.

This Report offers an analysis of the overall hiring of recent law school graduates into federal judicial clerkships from 2015 to 2017 for each law school. It includes an overall hiring report, regional reports, overall hiring trends, an elite hiring report, and trends concerning judicial vacancies.

A preview of overall placement:

There's also been a decline in total law school federal clerkship placement, likely attributable in part to the rise in federal judicial vacancies:

For these and more, check out the Report!

Visualizing legal employment outcomes in California in 2017

This is the eighth and last in a series of visualizations on legal employment outcomes for the Class of 2017. Following posts on outcomes in Florida, Pennsylvania, Texas, New York, Illinois, Ohio, and DC-Maryland-Virginia, here is a visualization for legal employment outcomes of graduates of California law schools for the Class of 2017. (More about the methodology is available at the Florida post.) Last year's California post is here.

While most markets remained fairly stagnant, California saw a marked rise in placement year-over-year. Total graduates dropped to 3910, a slight decline from 4081 in 2016 but a big decline from 4403 in 2015 and 4731 in 2014. But the overall unfunded placement rate soared from 64.3% to 69.9%. That came from an increase in bar passage-required jobs, from 2206 to 2397, as J.D.-advantage placement dropped.

Law school-funded positions also tapered off, from 118 positions (2.9% of graduates) last year to 82 (2.1%) this year. (Please recall from the methodology that the bar chart is sorted by full-weight positions, which excludes school-funded positions, while the table below that is sorted by total employment as USNWR prints, which includes school-funded positions.)
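For those reading the chart against the table, here is a minimal sketch of the two orderings, with hypothetical rows shaped like the table below; the exact weighting follows the methodology in the Florida post, so treat the formulas here as approximations.

```python
# A minimal sketch of the two sort orders described above: the chart sorts by
# full-weight placement (bar passage-required + J.D.-advantage, excluding
# school-funded jobs), while the table sorts by the USNWR-style total that adds
# school-funded (LSF) positions back in. Rows and graduate counts are hypothetical.
schools = [
    {"name": "School A", "grads": 300, "bpr": 250, "jda": 10, "lsf": 20},
    {"name": "School B", "grads": 280, "bpr": 240, "jda": 15, "lsf": 0},
]

def full_weight_rate(s):    # chart order: excludes school-funded positions
    return (s["bpr"] + s["jda"]) / s["grads"]

def printed_total_rate(s):  # table order: includes school-funded positions
    return (s["bpr"] + s["jda"] + s["lsf"]) / s["grads"]

chart_order = sorted(schools, key=full_weight_rate, reverse=True)
table_order = sorted(schools, key=printed_total_rate, reverse=True)
print([s["name"] for s in chart_order])   # ['School B', 'School A']
print([s["name"] for s in table_order])   # ['School A', 'School B']
```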

As always, please notify me of any corrections or errata.

Peer Score School 2017 YoY% BPR JDA LSF 2016 BPR JDA LSF
4.4 University of California-Berkeley 94.8% 0.2 269 6 14 94.5% 278 11 23
4.8 Stanford University 93.9% -0.1 163 15 7 94.0% 164 4 4
3.9 University of California-Los Angeles 92.5% 1.6 283 17 31 90.8% 239 18 30
3.5 University of Southern California 90.0% 4.5 179 4 5 85.5% 140 9 22
3.3 University of California-Irvine 86.5% 0.9 72 4 7 85.6% 84 3 14
3.4 University of California-Davis 84.4% 3.3 119 10 12 81.2% 87 11 14
2.6 Loyola Law School-Los Angeles 79.6% 6.0 204 31 3 73.6% 221 36 5
2.6 Pepperdine University 75.8% 10.0 134 34 1 65.7% 98 19 2
1.9 Chapman University 68.2% 7.5 81 20 0 60.8% 78 18 0
2.6 University of San Diego 67.6% 9.8 121 17 0 57.8% 102 24 0
3.0 University of California-Hastings 67.5% 0.5 166 22 1 67.0% 154 46 1
1.6 California Western School of Law 64.5% 1.4 106 21 0 63.1% 82 29 0
2.4 Santa Clara University 64.0% 2.6 77 10 0 61.4% 102 30 0
2.0 University of San Francisco 60.8% 13.6 75 18 0 47.1% 46 20 0
1.9 McGeorge School of Law 59.1% 2.3 62 16 0 56.8% 56 23 0
1.8 Southwestern Law School 58.6% 4.1 124 43 0 54.5% 125 48 2
1.1 Western State College of Law 52.1% 7.0 32 6 0 45.1% 29 12 0
1.5 Golden Gate University 51.7% 10.7 33 11 1 41.1% 30 15 1
nr Whittier Law School 39.6% 0.5 44 15 0 39.1% 38 12 0
1.2 University of La Verne 36.8% 5.5 12 2 0 31.4% 7 9 0
nr Thomas Jefferson School of Law 32.2% 0.3 41 15 0 31.9% 46 21 0

Visualizing legal employment outcomes in DC-Maryland-Virginia in 2017

This is the seventh in a series of visualizations on legal employment outcomes for the Class of 2017. Following posts on outcomes in Florida, Pennsylvania, Texas, New York, Illinois, and Ohio, here is a visualization for legal employment outcomes of graduates of DC, Maryland, and Virginia law schools for the Class of 2017. (More about the methodology is available at the Florida post.) Last year's DC-Maryland-Virginia post is here.

There were around 3410 graduates of law schools in the region, down from 3600 last year and 3740 for the Class of 2015, nearly a 10% decline in just two years. Overall unfunded placement rose from 76.8% to 78.3%. Most of that improvement came from the declining number of graduates, but, as a positive sign, J.D.-advantage placement dropped significantly while bar passage-required placement held steady. Georgetown continues its robust school-funded placement (40 jobs), well ahead of George Washington (9) and Virginia (8).

As always, please notify me of any corrections or errata.

Peer Score School 2017 YoY% BPR JDA LSF 2016 BPR JDA LSF
4.3 University of Virginia 96.6% 0.6 271 7 8 96.1% 293 5 19
4.1 Georgetown University 89.0% 1.9 504 40 40 87.1% 486 38 44
3.2 Washington & Lee University 83.8% -0.4 79 4 0 84.2% 73 7 0
3.3 George Washington University 82.6% 2.8 422 69 9 79.8% 373 61 9
2.6 University of Richmond 81.9% 4.9 101 21 0 77.0% 95 19 0
2.8 Antonin Scalia Law School 80.9% -5.5 92 26 5 86.5% 86 25 4
3.2 William & Mary Law School 80.8% -0.6 158 10 0 81.3% 162 21 0
2.9 University of Maryland 77.8% -5.8 108 36 3 83.6% 126 51 1
1.3 Regent University 74.4% 3.9 46 10 2 70.5% 57 5 0
2.0 University of Baltimore 72.8% -3.4 136 27 0 76.2% 142 69 0
2.4 Howard University 69.9% 6.7 65 6 1 63.2% 65 20 1
1.2 Liberty University 69.0% 5.2 37 3 0 63.8% 32 4 1
2.8 American University 68.0% 1.7 197 56 0 66.3% 219 56 0
1.5 District of Columbia 66.2% 0.2 19 27 1 66.0% 33 30 1
2.2 Catholic University of America 64.0% -1.3 61 10 0 65.2% 53 37 0
1.2 Appalachian School of Law 59.5% 7.1 22 3 0 52.4% 15 7 0

Visualizing legal employment outcomes in Ohio in 2017

This is the sixth in a series of visualizations on legal employment outcomes for the Class of 2017. Following posts on outcomes in Florida, Pennsylvania, Texas, New York, and Illinois, here is a visualization for legal employment outcomes of graduates of Ohio law schools for the Class of 2017. (More about the methodology is available at the Florida post.) Last year's Ohio post is here.

There were around 950 graduates of Ohio's 9 law schools, down from around 1090 two years ago. That's helped placement in bar passage-required and J.D.-advantage jobs rise to 72.8% (including a few school-funded jobs), up three points. Overall jobs increased slightly. Remarkably, four of these law schools graduated classes of fewer than 100 students.

As always, please notify me of any corrections or errata.

Peer Score School 2017 YoY% BPR JDA LSF 2016 BPR JDA LSF
3.3 Ohio State University 88.5% -0.9 126 16 4 89.4% 137 20 3
2.3 University of Cincinnati 80.0% -3.7 53 3 0 83.7% 74 13 0
1.6 Ohio Northern University 76.9% 14.4 37 3 0 62.5% 35 10 0
1.9 University of Akron 71.7% 8.8 71 15 0 62.9% 58 20 0
2.7 Case Western Reserve University 71.0% 6.4 83 15 0 64.6% 56 8 0
1.9 University of Toledo 69.7% 9.5 42 11 0 60.2% 32 21 0
1.5 Capital University 65.8% 16.2 63 14 0 49.6% 45 14 0
1.9 Cleveland-Marshall College of Law 65.0% -2.6 61 15 0 67.5% 62 17 0
1.8 University of Dayton 62.5% -10.3 48 12 0 72.8% 40 19 0

February 2018 MBE bar scores collapse to all-time record low in test history

If that headline seems like déjà vu, it's because I wrote the same headline after the February 2017 MBE bar scores were released. There were some interesting comments last year about the best way to visualize the decline, so here are a couple of attempts below. (You can see more about the methodology choices in last year's post, including why I use a non-zero Y-axis; a zero baseline here would be absurd.)

We now know the mean scaled national February MBE score was 132.8, down 1.2 points from last year's 134.0, which was already an all-time record low. We would expect bar exam passing rates to drop in most jurisdictions.

For perspective, California's "cut score" is 144, Virginia's 140, Texas's 135, and New York's 133. The trend is more pronounced when looking at a more recent window of scores.

On the heels of an uptick in MBE scores last July, the results are particularly troubling. Given how small the February pool is in relation to the July pool, it's hard to draw too many conclusions from the February test-taker pool.

That said, the February cohort is historically much weaker than the July cohort, in part because it includes so many who failed in July and retook the exam in February. We don't yet know the percentage of repeaters, but that would be the first place to look.

Another reason might relate to the increase in the July scores. Based on some informed speculation, some schools may have been advising some more at-risk students to delay taking the July exam and instead prepare more for the February exam in hopes of increasing first-time pass rates. If that happened, we may see a skewing in the quality of first-time test-takers in the February cohort, which would result in a decline in scores. That might explain some of the small improvement in July and decline in February.

At some point soon, however, we should see a more regular rebound in bar pass rates. The first major drop in bar exam scores was revealed to law schools in late fall 2014. That means the 2014-2015 applicant cycle, to the extent schools took heed of the warning, was a time for them to improve the quality of their incoming classes, leading to some improvement for the class graduating this May of 2018.

Of course, these are high-level projections and guesses; school-specific data would be useful. But this decline surely will not end the debates raging right now about the bar exam, and it will only serve to put more pressure on law schools looking ahead to this July's exam.

UPDATE: NCBEX has revealed that first-time test-takers were 30% of the pool and saw a smaller decline than repeaters, but the number of repeaters was mostly unchanged. Karen Sloan has more.

Visualizing legal employment outcomes in Illinois in 2017

This is the fifth in a series of visualizations on legal employment outcomes for the Class of 2017. Following posts on outcomes in Florida, Pennsylvania, Texas, and New York, here is a visualization for legal employment outcomes of graduates of Illinois law schools for the Class of 2017. (More about the methodology is available at the Florida post.) Last year's Illinois post is here.

There were around 1750 graduates of Illinois's 9 law schools, down from around 1820 last year and 2040 in 2015. That's helped placement in bar passage-required and J.D.-advantage jobs rise to 79.3% (including a few school-funded jobs), up a point over last year. Overall jobs declined slightly.

As always, please notify me of any corrections or errata.

Peer Score School 2017 YoY% BPR JDA LSF 2016 BPR JDA LSF
4.6 University of Chicago 97.7% -2.3 197 4 8 100.0% 201 4 10
4.2 Northwestern University (Pritzker) 94.0% 1.6 204 24 5 92.4% 203 19 8
3.2 University of Illinois-Urbana-Champaign 86.6% -0.7 111 12 0 87.3% 131 13 1
2.5 Loyola University Chicago 77.5% 4.8 138 34 0 72.7% 119 32 1
1.7 Northern Illinois University 77.1% 1.0 42 11 1 76.1% 52 14 1
2.5 Illinois Institute of Technology (Chicago-Kent) 74.0% 3.3 133 32 0 70.7% 136 35 0
1.7 The John Marshall Law School 71.6% 6.0 161 43 0 65.5% 153 41 0
2.3 DePaul University 69.6% -0.5 126 34 0 70.1% 126 38 0
1.6 Southern Illinois University-Carbondale 58.6% -10.6 64 4 0 69.2% 67 14 0