After the latest release of Professor Greg Sisk’s scholarly impact measure for law school faculties, Professor Brian Leiter blogged a series of smaller rankings of individual faculty members in different scholarly areas. I thought I’d use the data for a quick look at the differences between measures of scholarly activity. The Sisk-Leiter method is a longstanding project; I thought I’d compare it to Google Scholar’s H5 index.
One major barrier to using Google Scholar is that it only works for those who create an account (absent a time-consuming back channel like Publish or Perish). But the two measures also do different things.
Google Scholar's index covers more works than the Sisk-Leiter methodology, including far more non-legal and interdisciplinary works. It's a value judgment as to which metric ought to matter--or, perhaps, it's a reason to consider both and acknowledge that they measure different things!
Google Scholar gives "credit" each time an author is cited in a single piece; Sisk-Leiter only gives "credit" for one mention. The downside for Sisk-Leiter is that an author who has 12 of her articles cited in a single piece would receive credit in Google Scholar for 12 citations, but only 1 in Sisk-Leiter. On the flip side, an author who cites himself 12 times in a single piece would receive credit in Google Scholar for 12 citations, but only 1 in Sisk-Leiter--and, I think, self-citations are, on the whole, less valuable when measuring "impact."
Google Scholar credits all authors of a work; Sisk-Leiter excludes names omitted in an "et al." There is a method to help mitigate this concern, but, again, this tends to benefit interdisciplinary scholars in Google Scholar and tends to benefit (through omission) the more typical sole-authored law pieces in Sisk-Leiter. That said, Professor Leiter updated his blog’s rankings with some corrections from Professor Ted Sichelman.
Google Scholar counts references only in indexed, recognized scholarship; Sisk-Leiter extends to all mentions, including blog posts or opinion pieces typically not indexed in Google Scholar. It's another value judgment as to which metric ought to matter, and on this dimension Sisk-Leiter can be broader than Google Scholar.
Sisk-Leiter offers a greater reward for a few highly-cited works; H5 offers a greater reward for breadth and depth of citations. (This is specific to the H5 measure, as opposed to Google Scholar more generally--Google Scholar also reports total citations in the last five years, but I chose to compare Sisk-Leiter to the H5 index rather than that C5 total.) H5 is the largest number X such that X pieces have each received at least X citations in the last five years. So if you have 10 articles that have each received at least 10 citations since 2013, your H5 index is 10. It doesn’t matter if your 11th piece has 9 citations; it doesn’t matter if one of your 10 pieces has 10,000 citations. It’s a measure of depth and breadth, different in kind from total citations.
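The H5 computation itself is simple to sketch. Given a list of per-article citation counts (assumed here to be counts from the last five years), a minimal Python version might look like this:

```python
def h5_index(citations):
    """Largest X such that X pieces each have at least X citations.

    `citations` is a list of per-article citation counts from the
    last five years.
    """
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # the piece at this rank still "covers" its rank
        else:
            break
    return h

# The scenario from the text: ten articles with at least 10 citations
# each, an 11th piece with 9, and one blockbuster piece.
print(h5_index([10000, 50, 30, 20, 15, 12, 11, 10, 10, 10, 9]))  # 10
```

Note that neither the 9-citation piece nor the 10,000-citation outlier moves the result, which is exactly the "depth and breadth" property described above.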
In the chart below, I logged the Sisk-Leiter citations and compared them to the Google H5 index. I drew from about 85 scholars who both appeared in a Leiter ranking and had a public Google Scholar profile, and I looked at their Google Scholar profiles this fall (which may mean the figures are slightly off from today’s). Google Scholar is also only as good as its profiles: if scholars have failed to keep their profiles current with recent publications, it may understate their citations. I highlighted in blue circles those identified in the Leiter rankings as age 50 and under.
I included a trendline to show the relationship between the two sets of citations. Those “above” the line have higher Sisk-Leiter scores than Google H5 index scores and “benefit,” in a sense, from the use of that metric over Google H5. Those “below” the line, in contrast, would “benefit” more from the use of Google H5. At a glance, it’s worth considering that perhaps more “pure law” scholars sit above the line and more interdisciplinary scholars below it—not a judgment about one or the other, and only a broad generalization, but one way of thinking about how we measure scholarly impact, and perhaps a reason to think more broadly about faculty impact. Recall, too, that this chart selectively includes faculty, and that some citation totals vary wildly with the particular fields scholars write in. The usual caveats about the data apply: there are weaknesses to every citation metric, and this is just one way of comparing a couple of them.
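For the curious, the "above/below the line" classification can be sketched in a few lines of Python. The scholars and numbers here are made up purely for illustration--they are not the actual data behind the chart--and the fit is an ordinary least-squares line of H5 on log(Sisk-Leiter citations):

```python
import math

# Hypothetical (name, Sisk-Leiter citations, Google H5) triples,
# for illustration only.
scholars = [("A", 1200, 18), ("B", 400, 25), ("C", 900, 12), ("D", 250, 10)]

xs = [math.log(c) for _, c, _ in scholars]  # logged citations
ys = [h for _, _, h in scholars]            # H5 scores

# Ordinary least-squares fit of H5 on log(citations).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# A scholar whose H5 falls below the fitted line looks relatively
# better under Sisk-Leiter; above it, relatively better under H5.
sides = {}
for name, cites, h5 in scholars:
    predicted = intercept + slope * math.log(cites)
    sides[name] = "H5-favored" if h5 > predicted else "Sisk-Leiter-favored"
print(sides)
```

The same residual-sign idea is what the chart shows visually: the interesting question is which kinds of scholars cluster on each side of the line.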