Comparing Google Scholar's H5 index to Sisk-Leiter citations

After the latest release of Professor Greg Sisk’s scholarly impact measure for law school faculties, Professor Brian Leiter blogged a series of smaller rankings of individual faculty members in different scholarly areas. I thought I’d use the data for a quick look at the difference between measures of scholarly activity. The Sisk-Leiter method is a longstanding project; here I compare it to Google Scholar’s H5 index.

One major barrier to using Google Scholar is that it only works for those who create an account (absent a time-consuming back channel like Publish or Perish). But beyond that, the two measures do different things.

The Google Scholar index covers more works than the Sisk-Leiter methodology, including far more non-legal and interdisciplinary works. It's a value judgment as to which metric ought to matter--or, perhaps, it's a reason to consider both and acknowledge that they measure different things!

Google Scholar gives "credit" for an author being cited multiple times in a single piece; Sisk-Leiter gives "credit" for only one mention. The downside for Sisk-Leiter is that an author who has 12 of her articles cited in a single piece would receive credit in Google Scholar for 12 citations, but only 1 in Sisk-Leiter. On the flip side, an author who cites himself 12 times in a single piece would receive credit in Google Scholar for 12 citations, but only 1 in Sisk-Leiter--and, I think, self-citations are, on the whole, less valuable when measuring "impact."

Google Scholar covers all authors; Sisk-Leiter excludes co-authors whose names are omitted behind "et al." There is a method to help mitigate this concern, but, again, this tends to benefit interdisciplinary scholars in Google Scholar, and tends to benefit (through omission) the more typical sole-author law pieces in Sisk-Leiter. That said, Professor Leiter updated his blog’s rankings with some corrections from Professor Ted Sichelman.

Google Scholar counts citations to works it indexes as recognized scholarship; Sisk-Leiter extends to all mentions of an author, including citations to blog posts or opinion pieces typically not indexed in Google Scholar. It's another value judgment as to which metric ought to matter. On this dimension, Sisk-Leiter can be broader than Google Scholar.

Sisk-Leiter offers a greater reward for a few highly-cited works; H5 offers a greater reward for breadth and depth of citations. This comparison is specific to the H5 index rather than Google Scholar generally: Google Scholar also reports total citations in the last five years, but I chose to compare Sisk-Leiter to the H5 index instead of the C5 (citations in the last five years) index. H5 measures how many pieces (X) have received at least X citations in the last five years. So if you have 10 articles that have each received at least 10 citations since 2013, your H5 index is 10. It doesn’t matter if your 11th piece has 9 citations; it doesn’t matter if one of your 10 pieces has 10,000 citations. It’s a measure of depth and breadth, different in kind from total citations.
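To make the definition concrete, here is a minimal sketch of computing an H5-style index from per-article citation counts over the last five years (the counts are hypothetical placeholders):

```python
# A minimal sketch of an H5-style index, assuming we already have each
# article's citation count over the last five years (hypothetical data).
def h_index(citations):
    """Return the largest X such that X articles have at least X citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Ten articles with at least 10 citations each yields an H5 of 10, regardless
# of an 11th piece with 9 citations or one piece with 10,000 citations.
print(h_index([10000, 40, 25, 18, 15, 14, 12, 11, 10, 10, 9]))  # -> 10
```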

In the chart below, I logged the Sisk-Leiter citations and compared them to the Google H5 index. I drew from about 85 scholars who both appeared in the Leiter rankings and had a public Google Scholar profile, and I looked at their Google Scholar profiles this fall (which may mean that figures are slightly off from today’s figures). Google Scholar is also only as good as the profiles themselves; if scholars have not kept their profiles current with recent publications, it may understate their citations. I highlighted in blue circles those identified in the Leiter rankings as age 50 and under.

I included a trendline to show the relationship between the two sets of citations. Those “above” the line have higher Sisk-Leiter scores than their Google H5 index scores would predict and “benefit,” in a sense, from the use of that metric over Google H5. Those “below” the line, in contrast, would “benefit” more from the use of Google H5. At a glance, it’s worth considering that perhaps more “pure law” scholars sit above the line and more interdisciplinary scholars below it—not a judgment about one or the other, and only a broad generalization, but one way of thinking about how we measure scholarly impact, and perhaps a reason to think more broadly about faculty impact. Recall, too, that this chart selectively includes faculty, and that some citation totals vary wildly due to the particular fields scholars write in. The usual caveats about the data apply—there are weaknesses to every citation metric, and this is just a way of comparing a couple of them.
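For those curious about the mechanics, here is a minimal sketch of the log-and-trendline comparison; the numbers are hypothetical stand-ins for the scholars' actual figures, and putting the logged Sisk-Leiter citations on the y-axis is my assumption about the chart's orientation:

```python
# A rough sketch of the comparison, with hypothetical data standing in for
# the ~85 scholars' figures (logged Sisk-Leiter citations against H5 index).
import numpy as np

sisk_leiter = np.array([120, 340, 560, 900, 1500])  # hypothetical citation counts
google_h5 = np.array([8, 14, 17, 22, 30])           # hypothetical H5 indexes

log_sl = np.log(sisk_leiter)                         # log the Sisk-Leiter citations
slope, intercept = np.polyfit(google_h5, log_sl, 1)  # simple linear trendline

# Positive residuals sit "above" the line: higher Sisk-Leiter citations than the
# trend predicts for a given H5 index; negative residuals sit "below" the line.
residuals = log_sl - (slope * google_h5 + intercept)
print(residuals.round(2))
```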

"Cyber Interference in Elections and Federal Agency Action"

I have this piece up at the Harvard Law Review Blog, Cyber Interference in Elections and Federal Agency Action. It begins:

Pop quiz: which part of the federal government is tasked with preventing cyber interference in our elections?

Congress has refused to say. We have reached a point of a significant gap between an important federal need and existing federal power. And in the absence of that federal power, federal agencies have stepped into the gap and extended their authority into domains unanticipated by Congress.

Forthcoming article: "The Democracy Ratchet"

Over at SSRN, I've posted a draft of The Democracy Ratchet, forthcoming in the Indiana Law Journal. Comments welcome! The abstract:

Litigants seeking to lift burdens on the right to vote and judges adjudicating these claims have an unremarkable problem—what is the benchmark for measuring the nature of these burdens? Legal theories abound for claims under the constellation of rights known as the "right to vote." And when a legislature changes a voting practice or procedure, courts may have an easy benchmark—they can consider what the right to vote looked like before and after the enactment of the new law, and they can evaluate a litigant’s claim on that basis. Recently, federal courts have been relying on this benchmark for the principal causes of action litigants might raise after a new law has been enacted—a Section 2 challenge under the Voting Rights Act, a freedom of association claim subject to the Burdick balancing test, and an Equal Protection analysis derived from Bush v. Gore. And frequently, courts have found that new laws that eliminate once-available voting practices or procedures fail.

I describe this new practice as the Democracy Ratchet. But it is only recently that a convergence of factors has driven courts to (often unwittingly) adopt the Democracy Ratchet more broadly. So while a legislature can expand voting opportunities, courts scrutinize cutbacks on such opportunities with deep skepticism—deeper than had no such opportunity ever existed. The ratchet tightens options, squeezing the discretion that legislatures once had.

This Article seeks to solve the puzzle of how courts have scrutinized, and should scrutinize, legislative changes to election laws. Part I identifies recent instances in which federal courts have invoked a version of the Democracy Ratchet. It identifies the salient traits of the Democracy Ratchet in these cases. Part II describes why the Democracy Ratchet has gained attention, primarily as a tactic of litigants and as a convenient benchmark in preliminary injunction cases. Part III examines the history of the major federal causes of action concerning election administration—Section 2 of the Voting Rights Act, the Burdick balancing test, and the Equal Protection Clause. In each, it traces the path of the doctrine to a point where a version of the Democracy Ratchet might be incorporated into the test. It concludes that these causes of action do not include a substantive Democracy Ratchet. Part IV turns to how the Democracy Ratchet might be used. It concludes that the Democracy Ratchet is best identified as an evidentiary device and a readily available remedy for courts fashioning relief. It then offers suggestions for its appropriate use. Part V identifies some concerns with existing use of the Democracy Ratchet and instances in which it may be incorrectly used. It offers guidance for courts handling changes to election laws. Part VI concludes.

Which Supreme Court justices are the topic of the most academic articles?

A recent draft article about Justice Kennedy's influence and legacy sparked a social media discussion about which justices attract the most academic attention.

I looked at the Westlaw database and searched for "ti(justice /2 [lastname])." The "ti()" field is slightly broader than just looking at the title of the article alone, but for this purpose it captured almost exclusively articles with a justice's name in the title. I didn't distinguish between Chief Justice Rehnquist's time as an Associate Justice (with that title) and his time as Chief Justice, but I added a "chief" to my Roberts search to separate out hits for Justice Owen Roberts (there was at least one...). This search also would typically remove results for "the Rehnquist Court" or "the Roberts Court," which are less about the chief justices in particular, but it may slightly undercapture articles about those two justices. UPDATE: This methodology somewhat undercaptures references to justices that are more colloquial (e.g., using just the last name without the title "justice") or that include the justice's name as an author without a title in a book review, but it eliminates far more false positives for the other justices, most of whom have more common names than "Scalia," "Alito," and "Sotomayor."
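Purely for illustration, here is a short sketch of generating those title-field search strings; the exact form of the "chief" qualifier in the Roberts search is my assumption, since the post says only that "chief" was added:

```python
# A sketch of building the Westlaw title-field queries described above
# (string construction only; the searches themselves were run by hand).
def title_query(last_name: str) -> str:
    if last_name.lower() == "roberts":
        # Assumed form: add "chief" to exclude hits for Justice Owen Roberts.
        return "ti(chief /2 justice /2 roberts)"
    return f"ti(justice /2 {last_name.lower()})"

for name in ["Scalia", "Alito", "Sotomayor", "Roberts"]:
    print(title_query(name))
```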

I imagine there might also be a logarithmic effect one might observe--or expect to observe--over the course of a justice's career. As a justice begins, few, if any, articles will be written about her; as her influence increases over time, we would expect to see more articles each year than the previous year. (There may also be separate bursts of scholarship around a justice's retirement and around a justice's death.) This metric is static and treats each year as the same--perhaps someone would spend more time analyzing year-by-year impact! Additionally, the increase in the volume of journals, particularly online journals available on Westlaw, may skew results toward justices with more recent tenures.

I narrowed my search to the "Law Reviews & Journals" database, which is broad enough to include some practitioners' publications and the ABA Journal but should work for a rough examination of justices. I then created a chart, "Law Journal Article Title Mentions Per Year," with the denominator being the years since that justice first joined the Supreme Court. I selected the 13 most recent justices (excluding Justice Gorsuch) to have served on the Court.
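For illustration, here is a minimal sketch of that per-year metric; the title-mention counts below are hypothetical placeholders, and only the joining years are real:

```python
# "Law Journal Article Title Mentions Per Year": hits divided by years since
# the justice joined the Court. Mention counts are hypothetical placeholders.
AS_OF_YEAR = 2017  # approximate vantage point of the post (an assumption)

justices = {
    # name: (year joined the Court, hypothetical title-mention count)
    "Scalia": (1986, 600),
    "Kagan": (2010, 28),
}

for name, (joined, mentions) in justices.items():
    years_of_service = AS_OF_YEAR - joined
    print(f"{name}: {mentions / years_of_service:.1f} title mentions per year")
```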

UPDATE: I transposed the dates for Justices Sotomayor and Kagan in an earlier version, and their data changed slightly.

I then added in Justice Gorsuch and looked simply at the raw citation totals, regardless of years of service.

I'll leave it to others to discern what these figures might mean, if anything. I'll note that Justice Scalia dwarfs all others, which was not surprising, but a few other results did mildly surprise me. A few pieces about Justice Gorsuch were apparently written in his days as a prospective, or actual, nominee, then converted into pieces with a title about what "Justice Gorsuch," rather than "Judge Gorsuch," has written or said in the past--one reason his number is at 5 after one year of service, to Justice Kagan's 4.

UPDATE: Dave Hoffman rightly points out that there are likely tens of thousands of law journal articles published every single year. Due to the search function in Westlaw, I could not even limit a search to articles mentioning "law" in a given year, because results cannot exceed 10,000 hits, and I can't narrow the search database to Law Reviews & Journals in WestlawNext. Articles about a Supreme Court justice, then, are a tiny slice of all scholarship. UPDATE: And, of course, this is only a very crude metric that is assuredly overinclusive and underinclusive. But the relative relationships between justices should be modestly illuminating.

New essay draft: "Legal Quandaries in the Alabama Senate Election of 2017"

I have posted a new essay forthcoming in the Alabama Law Review, entitled Legal Quandaries in the Alabama Senate Election of 2017. Here is the abstract:

President Donald Trump’s decision to nominate Alabama Senator Jeff Sessions as his Attorney General resulted in a vacancy in the Senate and triggered a special election. The special election, however, revealed the many complexities of the Seventeenth Amendment, special elections generally, and Alabama state law specifically.

This Article traces a series of legal quandaries that arose from the special election, some of which remain open questions for future Alabama elections, and for United States Senate elections more generally. Part I examines the scope of the Alabama Governor’s power to call for a special election under the Seventeenth Amendment and state law. Part II scrutinizes the complications for replacing a late-withdrawing candidate and for counting votes cast for a candidate who resigns. Part III identifies proposed gambits, from postponing the election to write-in campaigns, that never came to fruition. Part IV examines the timing surrounding certification of election results in Alabama. Part V looks at gaps in Alabama’s recount and election contest procedures. Finally, Part VI identifies the most significant opportunities to clarify Alabama law and to properly interpret the Seventeenth Amendment to avoid uncertainty in future elections.

I have a very short turnaround before submitting the final draft for editing, but I welcome any comments or feedback!

Draft work in progress: "The High Cost of Lowering the Bar"

My colleague Rob Anderson and I have posted a draft article, The High Cost of Lowering the Bar on SSRN. From the abstract:

In this Essay, we present data suggesting that lowering the bar examination passing score will likely increase the amount of malpractice, misconduct, and discipline among California lawyers. Our analysis shows that bar exam score is significantly related to likelihood of State Bar discipline throughout a lawyer’s career. We investigate these claims by collecting data on disciplinary actions and disbarments among California-licensed attorneys. We find support for the assertion that attorneys with lower bar examination performance are more likely to be disciplined and disbarred than those with higher performance.

Although our measures of bar performance only have modest predictive power of subsequent discipline, we project that lowering the cut score would result in the admission of attorneys with a substantially higher probability of State Bar discipline over the course of their careers. But we admit that our analysis is limited due to the imperfect data available to the public. For a precise calculation, we call on the California State Bar to use its internal records on bar scores and discipline outcomes to determine the likely impact of changes to the passing score.

We were inspired by the lack of evidence surrounding costs that may be associated with lowering the "cut score" required to pass the California bar, and we offered this small study as one data point toward that end. The Wall Street Journal cited the draft this week, and we've received valuable feedback from a number of people. We welcome more feedback! (We also welcome publication offers!)

The paper really does two things--it identifies the relationship between bar exam score and the likelihood of discipline, and it calls on the State Bar to engage in more precise data collection and analysis when evaluating the costs and benefits of changing the cut score.

It emphatically does not do several things. For instance, it does not identify causation; it identifies a number of possible reasons for the disparity (at pp. 12-13 of the draft). Additionally, it simply identifies a cost--lowering the cut score will likely increase the number of attorneys subject to discipline. It does not make any effort to weigh that cost--it may well be the case that the State Bar views the cost as acceptable given the trade-off of benefits (e.g., more attorneys, more access to justice, etc.) (see pp. 11-12 of the draft). Or it might be the case that the occupational licensing of the state bar and the risk of attorney discipline should not hinge on correlational measures like bar exam score.

There are many, for instance, who have been thoughtfully critical of the bar exam and would likely agree that our findings are accurate but reject the notion that they amount to insurmountable costs. Consider the thoughtful commentary from Professor Deborah Jones Merritt at the Law School Cafe, who has long offered careful and substantive critiques of the use of the bar exam generally.

It has been our hope that these costs would be addressed in a meaningful, substantial, and productive way. We include many caveats in our findings for that reason.

Unfortunately, not everyone has reacted to this draft that way.

The Daily Journal (print only) solicited feedback on the work with a couple of salient quotations. First:

Bar Trustee Joanna Mendoza said she agreed the study should not be relied on for policy decisions.

“I am not persuaded by the study since the professors did not have the data available to prove their hypothesis,” she said.

We feel confident in our modest hypothesis--that attorneys with lower bar exam scores are subject to higher rates of discipline--and we use two methods to support it. We do not have the individualized data that would allow us to measure the effect precisely, but we are confident in this central hypothesis.

Worse, however, is the disappointing answer. Our draft expressly calls on the State Bar to study the data: while we can only roughly address the impact at the macro level, the bar could use its internal records for more precise estimates. We do hope that the California State Bar will do so. But it appears it will not--at least, not unless it has already planned on doing so:

Bar spokeswoman Laura Ernde did not directly address questions about the Pepperdine professors’ study or their call for the bar to review its internal data, including non-public discipline. Ernde wrote in an email that the agency would use its ongoing studies to make recommendations to the Supreme Court about the bar exam.

Second are the remarks from David L. Faigman, dean of the University of California, Hastings College of the Law. Dean Faigman has been one of the most vocal advocates for lowering the cut score (consider this Los Angeles Times opinion piece). His response:

Among his many critiques, Faigman said the professors failed to factor in a number of variables that impact whether an attorney is disciplined. 

“If they were to publish it in its current form, it would be about as irresponsible a product of empirical scholarship I could imagine putting out for public consumption,” Faigman said. “God forbid anybody of policy authority should rely on that manuscript.”

It's hard to know how to address a critique when the epithet "irresponsible" is the substance of the critique.

We concede that many variables may contribute to attorney discipline (pp. 12-13), and the paper makes no attempt to disentangle them. Instead, we're pointing out that lower bar scores correlate with higher discipline rates, and that lowering the cut score further would likely result in still higher discipline rates. Yes, many factors go into discipline--but the consequence of lowering the cut score will still remain: higher rates of discipline.

And our call for policy authorities to "rely" on the manuscript is twofold--to consider that there are actual costs to lowering the cut score, and to use more data to more carefully evaluate those costs. Both, I think, are valuable things for a policy authority to "rely" upon.

We hope that the paper sparks a more nuanced and thoughtful discussion than the one that has been waged in lobbying the State Bar and state legislature so far. We hardly know what the "right" cut score is, or the full range of costs and benefits that arise at varying changes to the cut score of the bar exam. But we hope decisionmakers patiently and seriously engage with these costs and benefits in the months--and, perhaps ideally, years--ahead.

"Natural Born" Disputes in the 2016 Presidential Election

I've posted a draft of a new article, "Natural Born" Disputes in the 2016 Presidential Election, forthcoming in the Fordham Law Review. Here is the abstract:

The 2016 presidential election brought forth new disputes concerning the definition of a "natural born citizen." The most significant challenges surrounded the eligibility of Senator Ted Cruz, born in Canada to a Cuban father and an American mother. Administrative challenges and litigation in court revealed deficiencies in the procedures for handling such disputes. This paper exhaustively examines these challenges and identifies three significant complications arising out of these disputes.

First, agencies tasked with administering elections and reviewing challenges to candidate eligibility often construed their own jurisdiction broadly, but good reasons exist for construing such jurisdiction narrowly given ample political and legal opportunities to review candidates' qualifications. Second, while litigation in federal court usually led to swift dismissal on a procedural ground, challenges in state proceedings sometimes led to broad—and incorrect—pronouncements about the power to scrutinize the eligibility of presidential candidates. Third, decision makers repeatedly mused about how useful it would be if the Supreme Court offered a clear definition of a "natural born citizen." This suggests that executive and judicial actors are uncomfortable with non-federal judicial resolution of a constitutional claim like this one.

Finally, this Article offers a recommendation. After three consecutive presidential election cycles with time-consuming and costly litigation, it may well be time to amend the Constitution and abolish the natural born citizen requirement. Amending the Constitution is admittedly no simple task. But perhaps an uncontroversial amendment would find broad support in order to avoid delays and legal challenges seen in recent presidential primaries and elections.

The twenty-two (or twenty-three) law reviews you should follow on Twitter (2016)

While you could follow a pretty sizeable list of law reviews I've maintained on Twitter, there are a handful of law reviews that rise above the rest.

Last year, I listed the twenty-two law reviews to follow on Twitter. I've modified the criteria slightly and updated it. I've mentioned that I find Twitter one of the best places to stumble upon scholarship and engage in a first layer of discussion about new ideas.

It continues to surprise me how challenging it is to find recent journal content. Many journals don't maintain a Twitter feed, much less a decent web site--most lack an RSS feed, are updated infrequently at best, and often include stock art (because, apparently, law reviews are into stock art?). Given the scarce resources law schools have today, one might expect schools to find ways of maximizing the value from their investments in their journals. (More on this soon.)

Alas, I'll settle for the occasional tweet on the subject. I looked at the flagship law reviews at the 106 law schools with a U.S. News & World Report peer score of 2.2 or higher. If I found their Twitter accounts, I included them. I then examined how many tweets they had, how many followers they had, and when their last tweet (not a retweet) took place. I then created a benchmark, modified slightly from last year: the law reviews "worth following" are those with at least 200 tweets, at least 200 followers, and at least one tweet (not a retweet or direct reply) in the last 45 days (as of July 1, 2016). I thought that would be a pretty minimal standard for the level and recency of engagement (a rough sketch of this screen appears after the list below). This 200/200/45 standard reduces the list to 23 accounts worth following (UPDATE: the original list was just 22, but I found one more thanks to Elli Olson):

Harvard Law Review

Yale Law Journal

University of Chicago Law Review

NYU Law Review

California Law Review

Penn Law Review

Michigan Law Review

Northwestern University Law Review

Georgetown Law Journal

UCLA Law Review

George Washington Law Review

Ohio State Law Journal

Iowa Law Review

University of Illinois Law Review

Hastings Law Journal

Washington & Lee Law Review

Connecticut Law Review

Case Western Reserve Law Review

Georgia State University Law Review

Nebraska Law Review

St. Louis University Law Journal

Syracuse Law Review

Michigan State Law Review

It's fairly notable, I think, that a majority of the schools on this list have a top-30 peer reputation score. Indeed, follower count is highly correlated with peer score (0.59)! There is also a high degree of continuity between last year's list and this year's list, showing, I think, that continuity matters for many of these journals' social media presence--and, perhaps, that it's harder for many journals to get anything started with a lasting institutional memory.
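As a rough illustration of both the 200/200/45 screen and the follower/peer-score correlation, here is a minimal sketch; the sample rows come from the table below, and the snapshot date matches the July 1, 2016 cutoff:

```python
# A sketch of the 200/200/45 screen and the follower/peer-score correlation,
# using a handful of rows from the table below as sample data.
from datetime import date
import numpy as np

AS_OF = date(2016, 7, 1)  # snapshot date used in the post

# (handle, peer score, tweets, followers, last original tweet)
journals = [
    ("@HarvLRev",       4.8, 850, 18900, date(2016, 6, 23)),
    ("@ColumLRev",      4.6, 294,  3747, date(2015, 10, 31)),
    ("@VirginiaLawRev", 4.3,  42,   670, date(2016, 6, 1)),
    ("@OhioStateLJ",    3.3, 627,  1486, date(2016, 6, 29)),
]

# The screen: at least 200 tweets, at least 200 followers, and at least one
# original tweet within 45 days of the snapshot.
worth_following = [
    j for j in journals
    if j[2] >= 200 and j[3] >= 200 and (AS_OF - j[4]).days <= 45
]
print([j[0] for j in worth_following])  # -> ['@HarvLRev', '@OhioStateLJ']

# Correlation between peer score and follower count (the post reports 0.59
# across the full table; this tiny sample will differ).
peer = np.array([j[1] for j in journals])
followers = np.array([j[3] for j in journals])
print(np.corrcoef(peer, followers)[0, 1])
```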

Below is the complete list of these journals, with 200/200/45 law reviews highlighted. If you see a journal not listed, tweet me about it @derektmuller.

Peer score Journal Tweets Followers Last tweet (not RT)
4.8 @HarvLRev 850 18900 June 23, 2016
4.8 @YaleLJournal 792 8676 June 23, 2016
4.8 @StanLRev 458 5722 April 29, 2016
4.6 @UChiLRev 349 4666 June 27, 2016
4.6 @ColumLRev 294 3747 October 31, 2015
4.5 @nyulawreview 1415 6390 June 29, 2016
4.5 @CalifLRev 398 2918 May 20, 2016
4.4 @PennLawReview 413 2837 May 21, 2016
4.4 @michlawreview 265 2254 June 21, 2016
4.3 @VirginiaLawRev 42 670 June 1, 2016
4.2 @NULRev 241 1015 June 30, 2016
4.2 @DukeLawJournal 66 1172 June 13, 2016
4.2 @CornellLRev 0 11 n/a
4.1 @GeorgetownLJ 325 1080 June 23, 2016
4.0 @TexasLRev 458 2059 April 28, 2016
3.9 @UCLALawReview 223 2332 June 23, 2016
3.9 Vanderbilt  
3.6 @emorylawjournal 85 211 June 28, 2016
3.5 @MinnesotaLawRev 112 625 June 29, 2016
3.5 Washington (St. Louis)  
3.4 @BULawReview 530 1232 October 19, 2015
3.4 @nclrev 76 160 March 28, 2016
3.4 @NotreDameLawRev 52 532 April 22, 2016
3.4 @SCalLRev 11 108 May 3, 2016
3.4 Wisconsin  
3.3 @GWLawReview 670 710 June 15, 2016
3.3 @OhioStateLJ 627 1486 June 29, 2016
3.3 @IowaLawReview 273 1196 May 16, 2016
3.3 @UCDavisLawRev 166 430 January 29, 2016
3.3 Indiana (Bloomington)  
3.2 @BCLawReview 372 1439 April 4, 2016
3.2 @WashLawReview 126 1175 May 21, 2015
3.2 @AlaLawReview 44 633 March 28, 2016
3.2 @GaLRev 32 411 April 4, 2016
3.2 Irvine  
3.2 William & Mary  
3.1 @fordhamlrev 381 2010 May 2, 2016
3.1 @UIllLRev 259 1202 May 29, 2016
3.1 @HastingsLJ 207 518 June 29, 2016
3.1 @FloridaLawRev 122 306 June 28, 2016
3.1 @arizlrev 32 242 July 1, 2015
3.1 @ArizStLJ 29 10 April 18, 2016
3.1 Colorado  
3.0 @WFULawReview 793 694 April 23, 2016
3.0 @WLU_LawReview 279 225 June 28, 2016
3.0 @TulaneLawReview 40 611 March 6, 2015
3.0 Maryland  
2.9 Florida State  
2.8 @BYULRev 42 103 May 11, 2016
2.8 @UtahLawReview 0 6 n/a
2.7 @ConnLRev 854 1188 May 25, 2016
2.7 @AmULRev 356 935 November 13, 2015
2.7 @geomasonlrev 219 258 February 18, 2016
2.7 @UMLawReview 193 946 June 3, 2016
2.7 @denverlawreview 153 686 April 26, 2016
2.7 @CardozoLRev 118 1079 June 28, 2016
2.7 @ukanlrev 105 528 September 25, 2014
2.7 @OregonLawReview 7 372 April 7, 2015
2.7 Tennessee  
2.6 @CaseWResLRev 821 840 June 23, 2016
2.6 @GSULawReview 646 268 June 29, 2016
2.6 @PeppLawReview 604 751 April 1, 2016
2.6 @TempleLawReview 38 67 May 18, 2016
2.6 @KYLawJournal 17 157 March 20, 2012
2.6 @LLSlawreview 11 25 March 23, 2016
2.6 @MoLawRev 11 42 June 7, 2016
2.6 @Houston_L_Rev 5 35 April 28, 2016
2.6 @PittLawReview 0 15 n/a
2.6 San Diego  
2.6 SMU  
2.5 @NebLRev 253 258 June 23, 2016
2.5 @LUCLawJournal 169 131 May 21, 2014
2.5 Chicago-Kent  
2.5 Hawaii  
2.4 @SCLawReview 541 892 April 14, 2016
2.4 @NevLawJournal 70 145 May 22, 2016
2.4 @RutgersLRev 63 631 April 3, 2015
2.4 @nuljournal 47 346 June 22, 2016
2.4 @RutgersLJ 12 526 May 2, 2014
2.4 @BrookLRev 0 4 n/a
2.4 Baylor  
2.4 Cincinnati  
2.4 Indiana (Indianapolis)  
2.4 Lewis & Clark  
2.4 Oklahoma  
2.4 Richmond  
2.4 Santa Clara  
2.3 @SLULawJournal 639 558 June 7, 2016
2.3 @SyracuseLRev 538 946 June 26, 2016
2.3 @HULawJournal 532 715 November 4, 2015
2.3 @MichStLRev 402 780 June 11, 2016
2.3 @SULawRev 53 91 February 20, 2016
2.3 @VillanovaLawRev 44 150 June 25, 2016
2.3 @SHULawReview 22 203 January 28, 2014
2.3 Marquette  
2.3 New Mexico  
2.2 @arklawrev 165 1839 February 15, 2016
2.2 @MaineLawReview 103 650 December 3, 2015
2.2 @lalawreview 95 910 March 24, 2016
2.2 @MSLawJournal 70 223 April 13, 2016
2.2 @UofL_Law_Review 16 46 March 31, 2016
2.2 @UMKCLawReview 5 91 April 22, 2016
2.2 @WVU_Law_Rev 4 26 Jule 28, 2013
2.2 DePaul  
2.2 Hofstra  
2.2 SUNY (Buffalo)