Significant one-year peer USNWR survey score drops, their apparent causes, and their longevity

The peer score from USNWR’s annual law school rankings consists of the results of a survey it sends out to around 800 voters. Those voters are the dean, the associate dean for academics, the chair of the hiring committee, and the most recently tenured faculty member at each law school. Response rates tend to be fairly high, usually around 70%. Voters are asked to evaluate schools on a scale of 1 (marginal) to 5 (outstanding), or N/A if a voter doesn’t have enough information. Those results are averaged into each school’s “peer score.”
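To make the arithmetic concrete, here is a minimal sketch of how a peer score would be computed under the description above. The ballot data is invented (USNWR does not release raw ballots), and the `peer_score` function is my own illustration, not USNWR's methodology document.

```python
# Minimal sketch of the peer-score arithmetic described above, using invented
# ballot data. "N/A" votes (represented here as None) are excluded before
# averaging, and the result is reported to one decimal place.

def peer_score(responses):
    """Average the 1-5 ratings, skipping N/A votes; round to one decimal."""
    rated = [r for r in responses if r is not None]
    return round(sum(rated) / len(rated), 1)

# Toy ballot set standing in for the ~560 of ~800 surveys typically returned.
ballots = [3, 4, 2, None, 3, 3, 4, 2, 3, None, 4, 3]
print(peer_score(ballots))  # 3.1
```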

For most schools, these results have been remarkably stagnant for decades.

Of course, I can only guess at why these drops occurred, but, for most schools, we have pretty good contemporaneous evidence of (negative) newsworthy events that likely prompted them.

(Please note that I use the year a ranking is published. USNWR calls the rankings published in 2019 the “2020 rankings,” but I use 2019 instead. The survey is sent out in the fall of the previous year, so the survey for the 2019 rankings went out around November 1, 2018.)
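Since this post is organized around one-year declines of 0.3 or more, here is a rough sketch of how one might flag such drops in a series of peer scores. The function is my own, and the year-by-year numbers are illustrative, loosely following the Illinois trajectory discussed below.

```python
# Rough sketch: flag year-over-year peer-score declines at or above a
# threshold. Scores are keyed by publication year; the data is illustrative.

def significant_drops(scores, threshold=0.3):
    """Yield (year, prior score, new score) for large one-year declines."""
    years = sorted(scores)
    for prev, curr in zip(years, years[1:]):
        drop = round(scores[prev] - scores[curr], 1)  # avoid float noise
        if drop >= threshold:
            yield curr, scores[prev], scores[curr]

illinois = {2010: 3.5, 2011: 3.5, 2012: 3.1, 2013: 3.2}
for year, before, after in significant_drops(illinois):
    print(f"{year}: {before} -> {after}")  # 2012: 3.5 -> 3.1
```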

Rutgers-Camden, 2002, 2.8 to 2.5. This may be the only drop truly due to (mis)fortune or chance. In the three previous surveys, Rutgers-Camden had scores of 2.7, 2.6, and 2.6. In 2001, it rose to 2.8. In 2002, it dropped to 2.5, and it remained in the 2.5 to 2.6 range for the next decade before settling at 2.4.

No particular scandal or controversy arose. Instead, the 2.8 might simply have been the fortune of one year, and the following 2.5 the misfortune of another. (Rutgers-Camden later merged with Rutgers-Newark.)

Loyola Law School, 2009, 2.6 to 2.3. By far the most inexplicable drop turned out to be attributable to a USNWR error. Loyola had held a 2.5 to 2.6 peer score for the decade before 2009. But in 2009, its peer score abruptly plummeted 0.3 to 2.3. The reason? USNWR renamed Loyola as “Loyola Marymount University” in the poll. While the school had long been affiliated with LMU, the law school’s brand had developed around a different name, which suddenly changed for one year.

The following year, Loyola’s name returned to “Loyola Law School,” its peer score rebounded to 2.6, and it has remained around there ever since. (It’s also the only time a school has risen 0.3 or more in a single year in the entire history of USNWR’s peer surveys.)

Illinois, 2012, 3.5 to 3.1. Illinois had consistently held a peer score of 3.4 to 3.6 for a decade. In 2011, the story broke that an admissions dean had single-handedly inflated median LSAT scores at Illinois in six of the previous ten years. Illinois was fined $250,000 and censured. In the 2012 rankings, Illinois’s peer score plunged from 3.5 to 3.1.

The Illinois drop is significant because of how high Illinois used to be, and because it makes the climb back that much harder. Illinois rose to a 3.3 one year but hasn’t gotten past that, and it sits at 3.2 in the most recent survey. The residual impact of an event from a decade ago remains (in my view, an unjustifiable result).

Villanova, 2012, 2.6 to 2.2. For a decade, Villanova’s scores hovered between 2.5 and 2.7. But in a separate 2011 scandal, the news broke that Villanova had “knowingly” reported inaccurate LSAT and UGPA data. It was censured by the ABA.

Villanova has mostly recovered, steadily rising back to a 2.5, but it has yet to return to 2.6. As with Illinois, the impact on the peer score has far outlasted any formal ABA sanction.

St. Louis University, 2013, 2.4 to 2.0. One of the more notorious drops in peer score arose after a series of controversies: the law school dean resigned in protest in August 2012, and disputes about university leadership remained prominent that fall. It’s one of just three times a school has dropped 0.4 in the peer score, assuredly in part because the news was still fresh when the survey circulated.

St. Louis has never returned to a 2.4, but it has slowly improved since the drop and has stood at a 2.3 for the last few surveys.

Albany, 2015, 2.0 to 1.7. For years, Albany had held a 2.1 or 2.2 peer score. In 2013, that score settled at 2.0 and remained there in 2014. That isn’t remarkable; scores often drift down by 0.1. But in 2015, the score dropped 0.3 to 1.7. In early 2014, the school had made headlines for buyout proposals amid financial exigency and faculty backlash. These were some of the first public signs of financial strain at U.S. law schools after the economic downturn: recall that enrollment jumped for the Class of 2012 and has dropped ever since. While many schools felt financial strain, few made it public. Today, of course, many more schools have had their financial struggles made public.

The impact didn’t last long. By 2016 the school had returned to a 1.9, and by 2017 to a 2.0, which is its score this year, too.

Vermont Law School, 2019, 2.2 to 1.9. The most recent drop took place in the most recent rankings. In the summer of 2018, Vermont announced that 14 of its 19 tenured professors would lose tenure, an announcement just a few months before ballots went out. Time will tell what happens next year, but we should expect a small bounce back up.


This post isn’t meant to shame any particular school or to endorse how the peer scores have reacted to scandals. It’s simply to note that some strong reactions do exist.

It also highlights the stickiness of the rankings. The cohort of voters can change fairly frequently. Voters include the dean, the associate dean for academics, the chair of faculty appointments, and the most recently tenured faculty member. Those positions turn over with some frequency: the typical dean serves about three years, new faculty hires mean a steady stream of tenure grants, appointments chairs rotate as service commitments change, and so on. Nevertheless, the peer score remains tough to move. Smaller controversies, a USNWR mistake, or apparent randomness appear to have little staying power. But bigger scandals have prevented scores from ever returning to where they were before the scandal, even when the school has faced appropriate sanctions and everyone involved has moved on. Whether it’s inertia or long punitive (and vindictive?) memories, peer scores can remain depressed.

Importantly, I hope some law professors will reconsider why they vote the way they do. Are they voting based on the present state of a law school (its student body, its student outcomes, its faculty, its administrators, and so on) or based on some past act of the school? By reflecting on why they vote the way they do, we may see less (arguably) punitive voting.