My advice to students looking to enroll in classes

I recently offered a few thoughts on Twitter about advice for students looking to enroll in classes. The advice proved popular, and some people added nuance or qualifications, so I thought an extended discussion here might be warranted. While I teach at a law school and think specifically about that context, the advice works well for higher education generally.

1. Take the professor, not the course. In my seven years of higher education, I never regretted a course taken with a professor I liked in an area outside my specialties or interests; and I’d say all of my least-favorite courses were ones I felt I “had” to take or “ought” to take for one reason or another. The quality of the professor often makes or breaks a course. In my conversations with students about their favorite and least-favorite courses, the discussion usually turns on the professor rather than the content of the course.

There is a risk that this becomes some kind of cult of personality around faculty. But I do think we are inclined to learn best from the people we best understand, or whose teaching style is most interesting to us.

There is a risk, too, that we ignore courses that are essential for our major or for an area of legal practice. But I don’t worry too much about that (though it does give me some pause). For one, if you like all the faculty in a different area—criminal law when you want to be a corporate attorney, for instance—maybe you picked the wrong field or the wrong school…. There are some courses that are simply unavoidable, often because they are required. And there were courses I valued taking because I felt they would help my career—federal courts and criminal procedure in anticipation of my clerkship, for instance (even though I did like those professors!). But, caveats aside, I still advise students to give the professor significant weight when selecting courses.

2. Find courses with writing and substantial revision requirements. Who hasn’t been the student relieved that they have no exams and only paper courses? But school—particularly, again, I think of law school—is a tremendous opportunity to improve one’s writing ability without the pressures of, say, a demanding client or boss frustrated with your writing ability! Writing opportunities, then, are terrific places to improve this craft. But it’s not just dumping words onto a page at the end of the semester. Find courses that also include revision requirements—a draft due early, a professor’s feedback about the piece’s strengths and weaknesses, and an opportunity to improve it. In law school, this is an essential component of legal research and writing. But finding such opportunities in the upper-division curriculum requires you to seek them out—and requires faculty willing to incorporate draft revision in the syllabus rather than simply expecting some paper at the end.

3. Pick a schedule with course times that help your self-discipline. I loved 8 am courses in school. In college, they helped keep me on a disciplined schedule and ensured I didn’t skip breakfast on my meal plan. In law school, they kept me and my wife (who worked) on similar schedules. I liked morning courses because I paid attention best then; I liked doing homework in the afternoon. I liked scheduling classes every day because it forced me to get into school every day to study. In short, I found out what worked best for me and made sure I planned schedules around it. Too often, it’s tempting to develop schedules around what is convenient. Convenience may be important, but self-discipline—developing habits that will help you avoid your own weaknesses or temptations, like procrastination or laziness—is crucial to future success.

4. Do not assume an elective will be offered next year: take it now. You’re looking at the schedule, and you see a neat course with a professor you like. But it’s an inconvenient time, or it runs up against a requirement in your discipline, or whatever it is. And you think, “Well, I’ll just take it next year.” Don’t do that. Don’t! Schedules are fickle things. Faculty lateral to another institution, go on sabbatical, visit elsewhere, take parental leave, or retire. There’s a deficiency in another area, so faculty give up the elective to help teach something else. Research interests shift. Low interest means the course isn’t offered again. There are a thousand reasons why there’s no guarantee the course will be offered next year. So take it now (if you can).

*

There are of course many, many factors to consider when scheduling courses. (Many on Twitter have been suggesting other considerations, too.) But these are four of my most common pieces of advice and things that can help improve one’s experience.

The relationship between 1L class size and USNWR peer score

Professor Robert Jones has tracked the peer scores for law schools as reported by USNWR over time. That is, each year, each law school receives a peer score on a scale of 1 to 5, and we can see how that score has changed over time. He’s also tracked the average of all peer scores. That raises an interesting question—what do law professors think about the state of legal education as a whole? Perhaps schools are always getting better and we’d see peer scores climb; perhaps schools increasingly rate each other more harshly as they compete to climb the rankings and we’d see peer scores drop. (There’s some evidence law faculties are pretty harsh about most law schools!)

At the risk of offering a spurious correlation, I noticed that the average score appeared to rise and fall with the conditions of the legal education market. The easiest way to track that would be to look at the overall 1L incoming class size the year the survey was circulated.

You can make a lot of correlations look good on two axes if you shape the two axes carefully enough. But there’s a good relationship between the two here. As the legal education market rose from 2006 to 2010 with increasingly large 1L class sizes, peer scores roughly trended upwards. As the market crashed through 2014, peer scores dropped. Now, the market has modestly improved—and peer scores have moved up much more quickly, perhaps reflecting optimism.
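For those who want to check this sort of relationship themselves, here is a minimal sketch of the comparison, with made-up figures standing in for the actual national 1L enrollment totals and the peer-score averages.

```python
# A minimal sketch of the comparison described above. The figures below are
# invented placeholders; the real inputs would be national 1L enrollment and
# the average USNWR peer score in each survey year.
from statistics import correlation  # Python 3.10+

enrollment_1l = [49.1, 51.6, 52.5, 48.7, 44.5, 39.7, 37.9, 37.1, 37.4, 38.4]   # thousands (hypothetical)
avg_peer_score = [2.65, 2.68, 2.70, 2.66, 2.62, 2.58, 2.56, 2.57, 2.60, 2.64]  # 1-to-5 scale (hypothetical)

r = correlation(enrollment_1l, avg_peer_score)
print(f"Pearson r between 1L class size and average peer score: {r:.2f}")
```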

All this is really just speculation about why peer scores would change, on average, by more than 0.1 in a single decade, or why they’d move up and down. Intuitively, it makes sense that peer scores would improve as the legal academy feels better about the state of legal education and worsen as it feels worse. There are far better ways to investigate this claim, but the relationship struck me as noteworthy!

"Nonjudicial Solutions to Partisan Gerrymandering"

I have posted this draft of an Essay forthcoming in the Howard Law Journal, “Nonjudicial Solutions to Partisan Gerrymandering.” Here is the abstract:

This Essay offers some hesitation over judicial solutions to partisan gerrymandering, hesitation consistent with Justice Frankfurter’s dissenting opinion in Baker v. Carr. It argues that partisan gerrymandering reform is best suited for the political process and not the judiciary. First, it traces the longstanding roots of the problem and the longstanding trouble the federal judiciary has had engaging in the process, which cautions against judicial intervention. Second, it highlights the weaknesses in the constitutional legal theories that purport to offer readily available judicially manageable standards to handle partisan gerrymandering claims. Third, it identifies nonjudicial solutions at the state legislative level, solutions that offer more promise than any judicial solution and that offer the flexibility to change through subsequent legislation if these solutions prove worse than the original problem. Fourth, it notes weaknesses in judicial engagement in partisan gerrymandering, from opaque judicial decisionmaking to collusive consent decrees, that independently counsel against judicial involvement.

This Essay is a contribution to the Wiley A. Branton/Howard Law Journal Symposium, "We The People? Internal and External Challenges to the American Electoral Process."

A continuing trickle of law school closures

UPDATE: As of late 2019, Western State is under new ownership and successfully petitioned for ABA accreditation, which renders portions of this post inaccurate.

One year ago today—March 22, 2018—I reflected on the “trickle” of law school closures. Some campuses closed (Cooley’s Ann Arbor branch, Atlanta’s John Marshall’s Savannah branch), two schools merged into one (William Mitchell and Hamline), and others announced their closure (Indiana Tech, Whittier, Charlotte, and Valparaiso). In the last year, Arizona Summit and Western State have announced their closures.

Western State closing two years after Whittier is a remarkable turn for legal education in Orange County, California. Orange County, with more than 3 million residents, is one of the most populous and fastest-growing counties in the United States.

California has long had a number of state-accredited law schools, schools that do not have full ABA accreditation. Western State has been around since the 1970s but was not the first to gain full ABA accreditation—that was Whittier, in 1978. Western State joined newcomer Chapman as fully accredited in 1998. Then UC-Irvine was accredited in 2011. But now two of those four schools have closed.

While we are a long way from the recession, and while law school enrollment has stabilized (and slightly improved) over the last few years, there remain longstanding pressures on legal education, in part from the legacy of the recession—small class sizes can only be sustained so long, scholarships have increased to attract students, the transfer market has disproportionately impacted more marginal schools, lower credentials of incoming students have translated into systemic lower bar passage rates, and so on.

We may still see a few more closures in the years ahead—for-profit schools have borne the brunt of the closures, but we’ll see what happens in the months to come.

The new arms race for USNWR law specialty rankings

The USNWR law “specialty” rankings long operated this way: schools would identify one faculty member whose specialty matched one of the various USNWR specialty categories (legal writing, trial advocacy, tax, etc.). USNWR would send a survey to those faculty asking them to list up to 15 of the top schools in those areas. USNWR would then take the top half of the schools that received a critical mass of votes, and rank them based upon who received the most votes—just an ordinal rank with no total votes listed. For many specialty areas, that meant 10 to 20 schools. And for the other 180 to 190 schools, that meant blissful ignorance.

USNWR changed that methodology this year in a few ways. First, its survey asks voters to rate every school in the specialty on a scale of 1 to 5, similar to how the peer reputation survey works. Second, it ranks all the schools that received a critical mass of votes (i.e., about 10 votes—and most law professors are not shy about rating most schools). Third, it now lists that reputation score, ties and all.

The result is that almost all schools are ranked in almost all categories. And now your school might be 33d or 107th or 56th or something in a category.

The result in some categories is comical compression. A score of 2.0 (out of 5) gets you to 91st in International Law, and a score of 1.0 (the bottom) gets you to 177th. Ties are abundant—after all, there are usually at least 180 schools ranked, and given that the scale runs from 5.0 to 1.0, and that virtually all schools fall in the 4.0 to 1.0 range, there are going to be a lot of ties.
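To make the compression concrete, here is a minimal sketch, with invented ratings, of how averaging 1-to-5 scores and ranking with ties produces the kind of bunching described above.

```python
# A minimal sketch, with invented average specialty ratings rounded to one
# decimal, of how standard competition ranking on a 5.0-to-1.0 scale produces
# heavy ties and compression.
import random
from collections import Counter

random.seed(0)
# Hypothetical average ratings for 180 schools, clipped to the 1.0-5.0 range
ratings = [round(min(5.0, max(1.0, random.gauss(2.0, 0.7))), 1) for _ in range(180)]

def competition_ranks(scores):
    """Tied schools share a rank; the next distinct score skips ahead by the tie size."""
    ranks, position = {}, 1
    for s in sorted(set(scores), reverse=True):
        ranks[s] = position
        position += scores.count(s)
    return ranks

ranks = competition_ranks(ratings)
print("A 2.0 rating lands at rank:", ranks.get(2.0))
print("Most common rating and its tie size:", Counter(ratings).most_common(1))
```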

Worse, schools can now advertise their “top X” program, when in the past X typically wouldn’t drop past 10 or 20. Now, top 30, top 50, and top 100 all earn bragging rights.

So now there’s a new arms race. Schools know exactly where they sit in this year’s survey, how tantalizingly close the next tranche of the ratings is (because of the ties), and how much higher that ranking would be (again, because of the ties). And they feel the temptation to pepper prospective voters with more marketing materials in the ever-escalating race to climb a new set of specialty rankings. In the past, it was blissful ignorance for those below 20th. Today, it’s all laid bare.

Perhaps I’m wrong. Maybe schools will mostly ignore the change to the specialty rankings. The compression and ties alone should cause most to ignore them. But I doubt it. The allure of rankings and the temptation of marketing departments to boast to prospective students and alumni about some figure (especially if that figure is higher than the overall USNWR rank) will, I think, overwhelm cooler heads.

Anatomy of a botched USNWR law ranking leak

For the past few years, USNWR has emailed all law school deans an embargoed PDF listing the tentative law school rankings about a week before their formal release. And for the past few years, within minutes (and in disregard of that embargo), that email has been leaked to a private consulting company, which then posts the rankings on its corporate blog, where they spread via social media and gossip sites.

This year, USNWR did something different. It released most of its graduate school rankings in an Excel spreadsheet on a password-protected site around 8 am ET on Tuesday, March 5. But it did not release the full-time law school rankings, nor the business school rankings. (I guess we know which schools are beholden to these rankings and where USNWR sees its value!)

Instead, shortly after, individuals at schools received their own school's ranking, and nothing more. This makes leaking much more challenging. If you leak your own school's ranking, it's obvious you leaked it, and USNWR may punish you by not giving you access to that embargoed data early next year. 

But around 5 pm ET on Tuesday, March 5, USNWR sent out a new update. Its Academic Insights database would now have the 2020 rankings data (that is, the rankings data to be released March 12, 2019). 

Academic Insights is a USNWR platform that law schools purchase a license to access and use. It has rankings data stretching back to the beginning. It offers multiple ways to view the data inside AI, or to pull the rankings data out of the database. 

It's fairly user friendly, but it isn't always the easiest to operate, and like many web databases it can suffer from some wonky behavior. That makes leaking a trickier proposition.

Around 7 pm ET on March 5, the private consulting company posted the rankings. But the posted rankings made it very obvious that there were errors, and they also provided clues about how those errors came about.

To leak this information to someone, some law school administrator made a data request from the database and exported the rankings information to a CSV file. The “Leaderboard” AI database is a swift way to see the ranking of law schools compared to one another across categories. (Recall that the database stretches back through the history of USNWR, so it includes all schools that were ever ranked over the last 30 years, whether or not they’re ranked, or even exist, this year.)

The list then included as “N/A” (i.e., “unranked” this year) schools like Arizona Summit and the University of Puerto Rico. This is unsurprising because USNWR doesn’t rank (1) provisionally-accredited schools, (2) schools under probation, and (3) the schools in Puerto Rico.

But the leaked ranking included other bizarre “unranked” choices: Hamline University; Pennsylvania State University (Dickinson) pre-2017; Rutgers, The State University of New Jersey--Camden; Rutgers, The State University of New Jersey--Newark; Widener University; and William Mitchell College of Law (among others). These schools all no longer exist (along with a couple of others that have announced closures). Why list them as “unranked”?

Separately, the leaked rankings omitted information for Penn State - University Park, Penn State - Dickinson, Rutgers, Widener University (Commonwealth), Widener University Delaware, and Mitchell|Hamline. Why aren’t these schools in the database?

These are obviously not random database omissions. They're omissions of schools that split or merged. Their predecessor schools are in the database, but the Leaderboard pull omitted the successor schools. (Why, I don't know.)

But there are ways of requesting school-specific data. You could request the specific institutional data in the AI database for, say, Penn State - University Park or Rutgers, and the data is now available for your review—including those institutions’ ranks. Of course, a few schools might ultimately be "rank not published," or "Tier 2" schools in the rankings. But they're not "unranked."

(Incidentally, from the revealed metadata, we know a lot of information about which person at which law school leaked the rankings, but that’s not what this blog post is about.)

The real botching came when the leaked ranking, with these strange inclusions and omissions and with some noticeable gaps (think two schools listed at 64, followed by a school ranked at 67, which means something is missing from the 64 tie), was posted and began to spread. Panicked students and prospective students at places like Penn State and Rutgers asked what happened. The private consulting company replied that it “appeared” the schools were “unranked.” That spawned a great deal of speculation and worry among these students.
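As an aside, those gaps are easy to spot mechanically. Here is a minimal sketch, with invented entries, of the consistency check: under standard competition ranking, the rank after a tie should equal the tied rank plus the number of schools tied at it, so a jump from two schools at 64 straight to 67 implies something was dropped.

```python
# A minimal sketch of the gap check described above, using invented entries.
# With two schools tied at 64, the next listed rank should be 66; a jump to 67
# implies at least one school was omitted from the leaked list.
from itertools import groupby

leaked = [("School A", 63), ("School B", 64), ("School C", 64), ("School D", 67)]

def find_gaps(entries):
    grouped = [(rank, len(list(g))) for rank, g in groupby(entries, key=lambda e: e[1])]
    gaps = []
    for (rank, count), (next_rank, _) in zip(grouped, grouped[1:]):
        missing = next_rank - (rank + count)
        if missing > 0:
            gaps.append((rank, missing))  # (tie where schools are missing, how many)
    return gaps

print(find_gaps(leaked))  # [(64, 1)]
```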

Of course, that wasn’t the case. The statements speculating that these schools appeared to be “unranked” were reckless—that is, they were made without an understanding of how the database operates and were based instead on speculation—and false—because, as I noted, each of these omitted schools had a ranking in the database, simply not in the CSV leaked to this private consulting company. (Later statements began to concede that these schools would probably be ranked, but those statements came only after worry and misinformation had spread.)

I pushed back against this false news last week in a couple of social media outlets, because it does no good to perpetuate false rumors about these law schools. These law schools, I insisted, would probably be ranked. They were ranked at that very moment in the AI database; and, barring a change, they would be ranked when the rankings were formally released (i.e., now). (Of course, some schools, like those under probation or those in Puerto Rico, were never going to be ranked.)

The backlash I received on social media was impressive. I confess, I'm not sure why so many prospective law students felt threatened by my insistence that someone had disclosed bad information about schools like Penn State and Rutgers to them! (Happily, such comments roll off easily.) After that, apparently, USNWR asked for those rankings to be taken down, and they were. (Of course, they still floated around social media and gossip sites.)

But we now know that leaking USNWR information from the AI database presents complications for future leaks. Failure to understand how the database operates may leave an incomplete and inaccurate picture, as happened with this year’s botched leak. We’ll see what USNWR does for the 2021 rankings—is a complete and accurate leak better, or an incomplete and inaccurate one?

And for those relying on leaks in the future? Read skeptically. The leak included material errors this year, and I wouldn't be surprised to see material errors in future leaks.

JD enrollment for 2019 on pace for another modest increase but slight quality decline

For the last year, LSAC has offered really useful tools for tracking the current status of JD applicants and LSAT test-takers. Data is updated daily and visualized cleanly.

We’re at a stage where we’d expect just about 80% of the applicant pool to be filled out, so it should be useful to see where things stand.

Applicants are up 3.1% year-over-year. Applicant increases don’t perfectly equate to increases in matriculants—last year, we saw an 8% applicant spike but a more modest 2.6% matriculant increase. But up is better than down, and we may see another modest increase in JD enrollment in 2019. We’ve seen modest increases the last several years—better for law schools than flat enrollment, but a continued reflection of a “new normal” in legal education.

A more disappointing development is that the quality of applicants has declined. Applicants with a 165 or higher LSAT score are down a few points. The bulk of the increase in applicants comes from those scoring 155-159. But then again, those with LSAT scores below 140 are also in decline.

It remains to be seen how many would-be high-LSAT performers opted for the GRE in lieu of the LSAT, which may affect how LSAT scores are distributed among applicants. But it’s another reason to think that any increase in JD enrollment in 2019 will be lower than the increase in the size of the applicant pool—at least, if law schools seek to preserve the quality of their classes.

Statehood, the District of Columbia, and the Twenty-Third Amendment

There’s a renewed effort for statehood for the District of Columbia in the new Democratic-controlled House, and H.R. 51 is the proposal to do so. (DC’s non-voting representative, Eleanor Holmes Norton, has snagged bill #51 as a symbolic gesture in the past, too.)

I’d recently wondered whether electors of the District of Columbia could cast votes for presidential and vice presidential candidates who resided in the District. As you may know, the Twelfth Amendment requires that electors cast two votes, one for president and one for vice president, “one of whom, at least, shall not be an inhabitant of the same state with themselves.”

DC isn’t a state, but the Twenty-Third Amendment gave DC presidential electors. So, could DC electors vote for an all-DC ticket? No, due to a clever phrase in the Amendment: “they shall be in addition to those appointed by the states, but they shall be considered, for the purposes of the election of President and Vice President, to be electors appointed by a state.” (Emphasis added.) In other words, whatever parts of the Constitution refer to “state” in the context of presidential electors? Those apply to DC’s electors, too.

But back to H.R. 51. The bill, like many such bills, excises several blocks from the District when creating a new state. Those blocks would remain the seat of government of the United States and would not be part of the new state. So a bill like H.R. 51 would create a new state out of the old District, but it would essentially split the District in two: a new-state District and a seat-of-government district. The seat-of-government district would be quite small, essentially a residual made up of several federal buildings and the like.

Of course, the Twenty-Third Amendment comes back into play: “The District constituting the seat of government of the United States shall appoint in such manner as the Congress may direct” a slate of presidential electors. So if DC becomes a state, it gets two Senators, at least one Representative, and at least three presidential electors. But the residual district—remember, just a few federal buildings carved out—is constitutionally entitled to presidential electors. No more than a few dozen people may live in this new seat of government—after all, DC’s residential areas would have become part of the new state.

A little Googling revealed this point has been raised before. That is, statehood for DC ought to be conditioned on repeal of the Twenty-Third Amendment, lest an anomalous residual district of a few federal buildings be entitled to a slate of presidential electors. That hasn’t been a point of emphasis, of course—getting popular support for statehood for DC is a high hurdle, and the first one that must be surmounted—but it struck me as notable.

Gaming out Hein citation metrics in a USNWR rankings system

Much has been said about the impending USNWR proposal to have a Hein-based citation ranking of law school faculties. I had some of my own thoughts here. But I wanted to focus on one of the most popular critiques, then move to gaming out how citations might look in a rankings system.

Many have noted how Hein might undercount individual faculty citations, either accidentally (e.g., misspellings of names) or intentionally (e.g., the exclusion of many peer-reviewed journals from its database), along with the intentional exclusion of certain faculty (e.g., excluding a very highly cited legal research & writing faculty member who is not tenured or tenure-track).

The individual concerns are real concerns, but not necessarily for the reason that they would tend to make the USNWR citation metrics less accurate. There are two reasons to be worried—non-random biases and law school administrative reactions—which I’ll get to in a moment.

Suppose USNWR said it was going to rank all law schools by the mean and median height of their tenured and tenure-track faculty. “But wait,” you might protest, “my faculty just hired a terrific 5’0” scholar!” or, “we have this terrific 6’4” clinician who isn’t tenure-track!” All true. But doesn’t every faculty have those problems? If so, then the methodology doesn’t have a particular weakness in measuring all the law schools as a whole against one another. Emphasis on weaknesses related to individuals inside or outside the rankings ought to wash out across schools.

Ah, you say, but I have a different concern—that is, let’s say, our school has a high percentage of female faculty (much higher than the typical law school), and women tend to be shorter than men, so this ranking does skew against us. This is a real bias we should be concerned about.

So let’s focus on this first problem. Suppose your school has a cohort of clinical or legal research & writing faculty on the tenure track, and they have lesser (if any) writing obligations; many schools do not have such faculty on the tenure track. Now we can identify a problem: some schools suffer under the methodology of these rankings.

Another—suppose your school had disproportionate numbers of faculty whose work appears in books or peer-reviewed journals that don’t appear in Hein. There might be good reasons for this. Or, there might not be. That’s another problem. But, again, just because one faculty member has a lot of peer-reviewed publications not in Hein doesn’t mean it’s a bad metric across schools. (Believe me, I feel it! My most-cited piece is a peer-reviewed piece, and I have several placements in peer-reviewed journals.)

Importantly, my colleague Professor Rob Anderson notes one virtue of the Sisk rankings that are not currently present in Hein citation counts: “The key here is ensuring that Hein and US News take into account citations TO interdisciplinary work FROM law reviews, not just citations TO law reviews FROM law reviews as it appears they might do. That would be too narrow. Sisk currently captures these interdisciplinary citations FROM law reviews, and it is important for Hein to do the same. The same applies to books.”

We simply don’t know (yet) whether these institutional biases exist or how they’ll play out. But I have a few preliminary graphics on this front.

It’s not clear how Hein will measure things. Sisk-Leiter measures things using a formula of 2*mean citations plus median citations. The USNWR metric may use mean citations plus median citations, plus publications. Who knows at this point. (We’ll know in a few weeks!)
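For reference, here is a minimal sketch of the Sisk-Leiter weighted score as just described (two times the mean plus the median of faculty citation counts), using invented citation counts.

```python
# A minimal sketch of the Sisk-Leiter weighted score described above:
# 2 * (mean citations per tenured/tenure-track faculty member) + (median citations).
# The citation counts below are invented for illustration.
from statistics import mean, median

faculty_citations = [12, 35, 48, 60, 70, 72, 85, 110, 150, 400]

def sisk_leiter_score(citations):
    return 2 * mean(citations) + median(citations)

print(round(sisk_leiter_score(faculty_citations)))  # about 279 for these invented counts
```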

In the future, if (big if!) USNWR chooses to incorporate Hein citations into the rankings, it would likely do so by diminishing the value of the peer review score, which currently sits at a hefty 25% of the ranking and has been extraordinarily sticky. So it may be valuable to consider how citations relate to the peer score and what material differences we might observe.

Understanding that Sisk-Leiter is an approximation for Hein at this point, we can show the relationship between the top 70 or so schools in the Sisk-Leiter score (and a few schools we have estimates for at the bottom of the range), and the relationship of those schools to their peer scores.

This is a remarkably incomplete portrait for a few reasons, not the least of which is that the trendline would change once we add the 130 schools with scores lower than about 210 to the mix. But very roughly, we can see that peer score and Sisk-Leiter score correlate, with a few outliers—schools outperforming their peer score via Sisk-Leiter above the line, and schools underperforming below it.

But this is also an incomplete portrait for another reason. USNWR standardizes each score, which means it places the scores in relationship with one another before totaling them. That’s how it can add a figure like $80,000 of direct expenditures per student to an incoming class median LSAT score of 165. Done this way, we can see just how much impact changes (either natural improvement or attempts to manipulate the rankings) can have. This is emphatically the most important way to think about the change. Law school deans who see that citations are a part of the rankings and reorient themselves accordingly may well be chasing after the wind if costly school-specific changes have, at best, a marginal relationship to improving one’s overall USNWR score.

UPDATE: A careful and helpful reader pointed out that USNWR standardizes each score but "rescales" only at the end. So the analysis below is simplified: I take the standardized z-scores and rescale them myself. This still allows us to make relative comparisons, but it isn't the most precise way of thinking about the numerical impact at the end of the day. It makes the analysis more readable, but less precise. Forgive me for my oversimplification and conflation!

Let’s take a look at how scaling currently works with the USNWR peer scores.

I took the peer review scores from the rankings released in March 2018 and scaled them on a 0-100 scale—the top score (4.8) became 100, and the bottom score (1.1) became 0.

As you can see, the scaling spreads out the schools a bit at the top, and it starts to compress them fairly significantly as we move down the list. I did a visualization of the distribution not long ago, but here you can see that an improvement in your peer score of 0.1 can nudge you up, but it won’t make up a lot of ground. That said, the schools are pretty well spread apart, and if you ground it out, you could make some headway if you improved your peer score by 0.5 points—a nearly impossible feat. Coupled with the fact that this factor is a whopping 25% of the rankings, it offers opportunity if someone could figure out how to move. (Most schools just don’t move. The ones that do tend to do so because of name changes.)

Now let’s compare that to a scaling of the Sisk-Leiter scores. I had to estimate the bottom 120 or so scores, distributing them down to a low end Sisk-Leiter score of 75. There’s a lot of guesswork, so this is extremely rough. (The schools around 70 or so have a Sisk-Leiter score of around 210.)

Yale’s 1474 becomes 100; the 75 I created for the bottom becomes 0. Note what happens here. Yale’s extraordinarily high citation count stretches out the field. That means lots of schools are crammed together near the bottom—consider Michigan (35), Northwestern (34), and Virginia (32). Two-thirds of schools are down in the bottom 10% of the scoring.
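Here is a minimal sketch of the 0-100 rescaling used in both comparisons, with the endpoints taken from above (peer scores running from 4.8 down to 1.1; Sisk-Leiter scores from Yale’s 1474 down to my estimated floor of 75); the sample inputs are illustrative.

```python
# A minimal sketch of the 0-100 rescaling used above. Endpoints come from the
# post (peer scores 4.8 down to 1.1; Sisk-Leiter 1474 down to an estimated 75);
# the sample inputs are illustrative.
def rescale(value, low, high):
    return 100 * (value - low) / (high - low)

# A 0.1 improvement in peer score nudges a school only a couple of scaled points.
print(round(rescale(2.6, 1.1, 4.8)), "->", round(rescale(2.7, 1.1, 4.8)))  # 41 -> 43

# Yale's outlier citation score crams most schools near the bottom of the scale.
for raw in (1474, 565, 350, 210, 75):
    print(raw, "->", round(rescale(raw, 75, 1474)))
```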

If you’re looking to gain ground in the USNWR rankings, and assuming the Hein citation rankings look anything like the Sisk-Leiter citation rankings, “gaming” the citation rankings is a terrible way of doing it. A score of about 215 puts you around 11 scaled points; a score of around 350 puts you at 20. Getting from one to the other would take the kind of dramatic movement equivalent to a 0.5 improvement in peer score.

But let’s look at that 215 again. That’s about a 70 median citation count. Sisk-Leiter covers three years. So on a faculty of about 30, that’s about 700 faculty-wide citations per year. To get to 350, you’d need about a 90 median citation count, or an increase to around 900 faculty-wide citations per year. Schools won’t make marked improvements with the kinds of “gaming” strategies I outlined in an earlier post. They may do so through structural changes—hiring well-cited laterals and the like. But I am skeptical that modest changes or gaming would have any impact.
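A quick back-of-the-envelope version of that arithmetic, treating the median citation count as a rough per-faculty figure (an approximation, since the Sisk-Leiter score itself combines the mean and the median):

```python
# Back-of-the-envelope arithmetic from the paragraph above: treat the median
# citation count as a rough per-faculty figure over the three-year window.
def faculty_wide_citations_per_year(median_citations, faculty_size=30, window_years=3):
    return median_citations * faculty_size / window_years

print(faculty_wide_citations_per_year(70))  # 700.0 per year (roughly a 215-level score)
print(faculty_wide_citations_per_year(90))  # 900.0 per year (roughly a 350-level score)
```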

There will undoubtedly be some modest advantages for schools that dramatically outperform their peer scores, and some modest injury for a few schools that dramatically underperform. But for most schools, the effect will be marginal at best.

That’s not to say that some schools won’t react inappropriately or with the wrong incentives to a new structure in the event that Hein citations are ultimately incorporated into the rankings.

But one more perspective. Let’s plug these Sisk-Leiter models into a USNWR model. Let’s suppose instead of peer scores being 25% of the rankings, peer scores become just 15% of the rankings and “citation rankings” become 10% of the rankings.

UPDATE: I have modified some of the figures in light of choosing to use the z-scores instead of adding the scaled components to each other.

When we do this, just about every school loses points relative to the peer score model—recall in the chart above, a lot of schools are bunched up in the 60-100 band of peer scores, but almost none are in the 60-100 band for Sisk-Leiter. Yale pushes all the schools downward in the citation rankings.
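Here is a minimal sketch of the reweighting I have in mind, with invented component values: each component is z-scored across schools (echoing how USNWR standardizes its components) and then weighted, comparing a 25% peer-score-only model against a 15% peer / 10% citation model.

```python
# A minimal sketch of the reweighting described above, with invented inputs.
# Each component is z-scored across schools, then weighted: a 25% peer-score
# model versus a 15% peer-score + 10% citation model.
from statistics import mean, pstdev

peer = {"A": 4.4, "B": 3.9, "C": 3.2, "D": 2.6, "E": 1.8}        # hypothetical peer scores
cites = {"A": 1400, "B": 600, "C": 350, "D": 215, "E": 100}      # hypothetical citation scores

def zscores(values):
    m, s = mean(values.values()), pstdev(values.values())
    return {k: (v - m) / s for k, v in values.items()}

z_peer, z_cite = zscores(peer), zscores(cites)

old_model = {k: 0.25 * z_peer[k] for k in peer}                     # peer reputation at 25%
new_model = {k: 0.15 * z_peer[k] + 0.10 * z_cite[k] for k in peer}  # split 15% / 10%

for school in peer:
    print(school, round(old_model[school], 3), round(new_model[school], 3))
```

Because the comparison runs on z-scores, what matters is how each school moves relative to the others, not the raw totals.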

So, in a model of 15% peer score/10% citation rankings among the top 70 or so Sisk-Leiter schools, the typical school drops about 13 scaled points. That’s not especially important for most of them—recall, they’re being compared to one another, so if most schools drop by a similar amount, most should remain unchanged relative to each other. And again, that drop affects a component worth only 25% of the rankings—it works out to roughly a 3-point change in the overall score.

I then looked at the model to see which schools dropped 24 or more scaled points in this band (i.e., a material drop in score, given that this component accounts for only about 25% of the ranking): Boston College, Georgetown, Iowa, Michigan, Northwestern, Texas, Virginia, and Wisconsin. (More schools would likely fit this model once we had all 200 schools’ citation rankings. The University of Washington, for instance, would also probably be in this set.) But recall that some of these schools—like Michigan and Northwestern—are already much higher than many other schools, so even a drop like this would have little impact on the ordinal rankings of law schools.

I then looked at the model to see which schools dropped 10 points or fewer (or gained!) (i.e., a material improvement in score): Chicago, Drexel, Florida International, George Mason, Harvard, Hofstra, Irvine, San Francisco, St. Thomas (MN), Toledo, and Yale. Recall again, Yale cannot improve beyond #1, and Harvard and Chicago are, again, so high in the rankings that marginal relative improvements in this area are likely not going to affect the ordinal rankings.

All this means that, for the vast majority of schools, we’ll see little change—perhaps some randomness in rounding or year-to-year variation, but I don’t project much change at all for most schools.

Someone with more sophistication than I could then try to game out how these fit into the overall rankings. But that’s enough gaming for now. We’ll wait to see how the USNWR Hein citation figures come out this year, then we might play with the figures to see how they might affect the rankings.

(Note: to emphasize once again, I just use Sisk-Leiter here. Hein will include, among other things, different citations; it may weight them differently than Sisk-Leiter; it uses a different window; it may use different faculty; and the USNWR citation rankings may well include publications in addition to citations.)