Could we improve the law faculty hiring process through blind reads of scholarship?

On the heels of an idea floated by my colleague Professor Rob Anderson… “Why don’t committees do a blind read of everyone’s stuff before looking at credentials? Yeah it takes resources, but you are making a multi-million $ investment for the next 40 years.”

I’ve thought about this over the last several days and wanted to offer a couple of ideas. These are very much working ideas, so feel free to critique!

Here’s the specific problem I’m trying to solve (and I speak generally, so it may not apply to your particular school, committee, or search!). Too often, at the Association of American Law Schools (AALS) Faculty Recruitment Conference (FRC), faculty hiring decisions at law schools are driven by shortcuts. We sift through hundreds of applications between August and October, looking at cues like law school attended, academic honors, visiting assistant professor position history, elite clerkships, or other CV items that look like prestige and quality. This often leads to a fairly narrow set of candidates who meet the criteria, and it tends to be those from elite socioeconomic backgrounds. It often demands geographic transience and flexibility from applicants, and it can produce inequalities among candidates that could run along race, sex, or class lines.

But even though schools look at these proxies, the proxies aren’t what law schools actually want in candidates. They’re looking for faculty who can write, who can produce and engage in good scholarship. They’re looking for good teachers, who can communicate complicated concepts to law students with clarity. These qualities can be hard to identify early in a career, but they’re what schools are trying to identify.

Many candidates—and perhaps most successful candidates—already have something in the way of scholarship, at least one publication, or even a good draft. After screening interviews in October, that’s what would be used as the “job talk” between October and February. Some schools request that paper between August and October, ahead of the screening interview. And now some applicants can upload that paper with their initial application.

But law schools still primarily filter and screen with these cues or shortcuts first. Reading the scholarship comes later. On top of that, the scholarship is now filtered through the bias of those cues: readers are inclined to think the work is going to be good because, well, the candidate made it through that filter! And if the article has placed in a sufficiently “prestigious” journal (perhaps even the VAP’s institution’s home journal…), the reader’s bias only grows.

So, why not have an opportunity to read scholarship without any of those cues? No resume, no pedigree, no placement information if the article is placed. Just a blind read of articles.

This could take one of two forms.

First, there could be a database, say in July, where prospective academics could upload the article. It would be stripped of their name and whether it was published. Perhaps it would include a few general scholarly fields if schools wanted to winnow their searches. Law school hiring committees could then pull the articles from the database and review them internally. The database would disclose the identity of the author upon request, but not within 14 days of when a law school requested the article (essentially, a forced cool-down period). Schools, now armed with internal blind reviews of scholarship, could identify these candidates in the AALS pool after the FAR distribution occurs.

Second, some volunteer law professors in various fields could offer to do blind review of articles submitted over the summer. The reviewers would assign each article a grade. Articles that “passed,” or that received a sufficiently high grade (e.g., 3.5 out of 5 stars), would be publicly identified, along with the author. Those with too low a score wouldn’t be identified—and non-identification could simply mean a candidate chose not to submit an article for review.

The second, in some ways, would be something like an NFL “draft grade” for prospective football players leaving college early, where independent evaluators assess their talent and tell them where they’d be projected to go.

The first has the benefit of giving committees full control over their review processes and requires no recruiting of outside reviewers. The second has the benefit of handling the volume of articles, if many choose to take advantage of the system, and of carrying a greater sense of “peer” review.
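To make the mechanics of the two models concrete, here is a minimal sketch in Python. Everything in it is hypothetical—the record fields, the 14-day identity-reveal delay from the first proposal, and the 3.5-of-5 threshold from the second—so treat it as an illustration of the workflow rather than a proposed implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from statistics import mean
from typing import Optional

REVEAL_DELAY = timedelta(days=14)   # forced cool-down before identity disclosure (model 1)
PASS_THRESHOLD = 3.5                # minimum average grade for public identification (model 2)

@dataclass
class Submission:
    article_id: str                            # anonymized identifier shown to committees
    fields: list[str]                          # general scholarly fields, for winnowing searches
    author: str                                # held back; never shown alongside the article
    requested_at: Optional[datetime] = None    # when a committee first pulled the article
    grades: list[float] = field(default_factory=list)   # reviewer grades, 1-5 stars

def request_article(sub: Submission, now: datetime) -> dict:
    """Model 1: a committee pulls the blind article; the disclosure clock starts."""
    if sub.requested_at is None:
        sub.requested_at = now
    return {"article_id": sub.article_id, "fields": sub.fields}   # no name, no placement

def reveal_author(sub: Submission, now: datetime) -> Optional[str]:
    """Model 1: identity is disclosed only after the 14-day cool-down has run."""
    if sub.requested_at is not None and now - sub.requested_at >= REVEAL_DELAY:
        return sub.author
    return None

def publish_if_passing(sub: Submission) -> Optional[str]:
    """Model 2: only articles with a sufficiently high average grade are identified."""
    if sub.grades and mean(sub.grades) >= PASS_THRESHOLD:
        return sub.author
    return None   # a low score and a non-submission look identical from the outside
```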

Of course, these are costly proposals—and maybe the filtering cues really are good at identifying those who go on to be good scholars. Or maybe it’s not so much that the filter identifies good scholars as that it provides a first rough cut, with the material review of scholarship happening after that first cut.

Really, then, the benefit would be for “diamonds in the rough,” those scholars who lack the pedigree but who may have some real promise as an academic.

It might also be that blind scholarship review doesn’t help us a whole lot when the scholarship has already gone through a pretty rigorous polishing by mentors in a VAP or PhD program.

And it might also be the case that it places more pressure on writing something early. But, to be frank, it may be that this ship has sailed, and we really do expect people to have started their writing careers before entering tenure-track academia.

But, here are a couple of potential models. Better than the status quo? One preferable to another? Improvements to be made? A third way?

Experimentation in reforming legal education

Professor Dan Rodriguez has a terrific and helpful post over at Legal Evolution, Toward evidence-based legal education reform: First, let’s experiment. This comes on the heels of his call for more data to help improve law school decisionmaking.

“Data-driven” is one of the trendiest buzzwords around at the moment, but he points out that we too easily assume the status quo is the most effective form of legal education, or that we can’t figure out whether it or another form is any good. We need evidence—data, yes, but, more than that, means of comparing different kinds of legal education and ascertaining whether one is better than another. I think Professor Rodriguez rightly notes that “internal political difficulty” tends to inhibit experimentation in legal education to a greater degree than accreditation bodies or rankings factors do—that is, there’s plenty of flexibility within existing accreditation frameworks, with minimal impact on USNWR rankings factors; it’s simply a question of will, desire, priorities, and the like.

One concern is that this language sounds so scientific, and it may prompt worries about institutional review board review and the like. But as an exchange on Twitter recently illuminated, labeling these efforts “pilot programs” rather than “experiments” or other overly scientific-sounding phrases may help ease some political concerns.

Additionally, I think it’s worth emphasizing that a lot of what we subjectively believe to be “uniform” is not very uniform at all, which opens up opportunities to treat similarly situated students differently within appropriate boundaries. There might be concerns about “experimenting” with a 1L section in the legal curriculum, but, really, 1L professors may already take vastly different approaches to a theoretically identical subject, including different exam and grading methodologies. Willingness to try “pilot programs” among subsets of law students should extend beyond professors’ academic freedom in the classroom, which is already an acknowledged source of differentiation among similarly situated students.

Importantly, Professor Rodriguez highlights the randomized nature of such programs. That’s also essential. Many students opt to take certain things, like bar prep classes, clinics, or externships. That self-selection means we may lose the ability to identify any independent value those programs have once selection bias clouds the results—for instance, a self-motivated student may opt for a bar prep class over a fellow student with similar grades who lacks that motivation, and it tells us little (if anything) if the first student passes the bar but the second doesn’t.
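To make the randomization point concrete, here is a minimal, entirely hypothetical sketch (not drawn from Professor Rodriguez’s post): among students who are eligible and willing, random assignment—rather than self-selection—decides who joins the pilot bar-prep section, so a later difference in pass rates is harder to attribute to motivation alone.

```python
import random

def assign_pilot(student_ids: list[str], seed: int = 2019) -> dict[str, list[str]]:
    """Randomly split eligible, consenting students into pilot and control groups.

    Because assignment is random rather than self-selected, a later difference in
    bar passage between the groups is more plausibly attributable to the pilot
    program itself rather than to student motivation.
    """
    rng = random.Random(seed)      # fixed seed so the assignment is auditable
    shuffled = list(student_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"pilot": shuffled[:half], "control": shuffled[half:]}

# Example: six hypothetical students with similar grades who all volunteered
groups = assign_pilot(["s1", "s2", "s3", "s4", "s5", "s6"])
print(groups["pilot"], groups["control"])
```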

In short, it requires political will and time from invested professors to make some of the changes Professor Rodriguez identifies. Unfortunately, little of significance appears to have happened in legal education in recent years, even in the face of dropping bar exam pass rates. Some schools and some isolated programs may be doing some things, but even those haven’t been deemed so wildly successful that other schools are racing to replicate them. Let’s hope there’s more movement in the years ahead.

Recent Supreme Court clerk placement into the legal academy

On the heels of my recent annual survey of where Supreme Court clerks end up 10 years after their clerkships, I thought I’d look at the data a different way. I’ve done this survey for seven years and have a good chunk of placement data for Supreme Court clerks. I thought I’d look at the 56 clerks who ended up as law professors 10 years after their clerkships, and where they’d landed in that time. Of course, clerks may have moved on to other schools after 10 years, some may have left the academy by 10 years, or others may enter the academy after 10 years. But looking at the same 10-year window of similarly-situated clerks across several years was of interest (ed.: or more likely serves as a Rorschach to confirm priors…).

I’ve sorted below by justice and then by school, among those who clerked OT 2003 to OT 2009, and where they were 10 years out.

Ginsburg (11): Yale, Harvard, Chicago (x2), Duke, Michigan (x2), Berkeley, Fordham, Wisconsin (x2)*

Stevens (11): Columbia (x3), Michigan, Penn, Duke, Wisconsin, Florida, Cardozo, Georgia State, American

Souter (10): Harvard, NYU (x2), Columbia, Virginia, Michigan, Northwestern, UCLA, William & Mary, Pepperdine

Kennedy (7): Harvard, Washington University in St. Louis, George Washington, Notre Dame (x2), Ohio State, Hastings

Breyer (4): Harvard, Chicago, Columbia (x2)

O’Connor (4): Yale, Chicago, Emory,** BYU

Roberts (3): Chicago, Duke, Missouri

Scalia (3): Columbia, Virginia, Richmond

Sotomayor (2): Georgetown, Wisconsin*

Thomas (2): Notre Dame, George Mason

Alito (1): Emory**

*Clerked for both Ginsburg and Sotomayor in different terms

**Clerked for O’Connor and later Alito in the same term

Columbia (7): Stevens (x3), Breyer (x2), Scalia, Souter

Chicago (5): Ginsburg (x2), Breyer, O’Connor, Roberts

Harvard (4): Breyer, Ginsburg, Kennedy, Souter

Michigan (4): Ginsburg (x2), Souter, Stevens

Duke (3): Ginsburg, Roberts, Stevens

Notre Dame (3): Kennedy (x2), Thomas

Wisconsin (3): Ginsburg, Ginsburg/Sotomayor, Stevens

NYU (2): Souter (x2)

Virginia (2): Scalia, Souter

Yale (2): Ginsburg, O’Connor

American (1): Stevens

Berkeley (1): Ginsburg

BYU (1): O’Connor

Cardozo (1): Stevens

Emory (1): O’Connor/Alito

Florida (1): Stevens

Fordham (1): Ginsburg

George Mason (1): Thomas

George Washington (1): Kennedy

Georgetown (1): Sotomayor

Georgia State (1): Stevens

Hastings (1): Kennedy

Missouri (1): Roberts

Northwestern (1): Souter

Ohio State (1): Kennedy

Penn (1): Stevens

Pepperdine (1): Souter

Richmond (1): Scalia

UCLA (1): Souter

Washington University in St. Louis (1): Kennedy

William & Mary (1): Souter

A few thoughts on open-source or free legal casebooks

Professor Brian Frye is a tireless advocate (among others) for open-source legal casebooks. Casebooks are costly for students—even rented or used casebooks can run students into the thousands of dollars over three years.

To this day, I feel ashamed to say I haven’t taken advantage of open-source casebooks or developed my own materials. I thought about some of the barriers to entry.

Obviously, the decision to assign a casebook shifts the financial costs to students and the work of developing the materials to others. In my earliest years, I thought I was too busy figuring out the material to be developing my own, and I just wasn’t satisfied with the open-source casebooks I saw. Later, I was switching to new preps, and it was too much work again. In still other areas, there simply haven’t been open-source casebooks.

There are a few sole-authored casebooks I really like—Professor George Fisher’s Evidence and Professor Gary Lawson’s Administrative Law come to mind. It would be a cost to give them up.

But this spring, in an election law seminar, I hope to develop my own materials—and I hope to continue to use and modify them for election law courses (seminar and survey) for years to come. It’s simply getting over that hurdle of doing it the first time.

But I thought about a few other barriers to free legal materials for students, or underexamined costs.

It’s interesting to me that law schools don’t recognize the student loan aspect of casebooks. Students can take out loans based upon the estimated cost of attendance. Many simply take whatever maximum figure the school estimates. And that, of course, can be an onerous cost for them down the road. Even if the casebooks don’t actually cost $1000 per year, if that’s the estimated figure, students may well take out that amount—which they may spend on casebooks or on other personal expenditures, then have to pay back after graduation. So $3000 in casebook loans becomes an extra ~$35/mo for 10 years (with more than $1000 in compounded interest!). Maybe it’s not a lot, but it’s a good monthly chunk.
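As a rough check on that arithmetic—my assumptions here, not figures from any school: a $3,000 principal, a ten-year term, and an interest rate around 7%, in the neighborhood of federal graduate loan rates—the standard amortization formula lands at roughly $35 a month and a bit under $1,200 in total interest:

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-payment amortization: P * r / (1 - (1 + r) ** -n)."""
    r = annual_rate / 12      # monthly interest rate
    n = years * 12            # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

payment = monthly_payment(3000, 0.07, 10)       # assumed 7% rate over 10 years
total_interest = payment * 120 - 3000
print(f"${payment:.2f}/mo, ${total_interest:.2f} in total interest")
# roughly $34.83/mo and about $1,180 in interest on $3,000 of casebook loans
```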

And worse for law schools, it’s $3000 in loans that isn’t even revenue to the law school! Much like loans for off-campus living expenses or travel to study-abroad locations, these loans don’t benefit the institution at all. Schools ought to think critically about such loans in particular, because the costs are often attributed to the law school—“Oh, these are my law school loans”—even if the law school didn’t get all the revenue from them.

Another curiosity is the notion that law professors, particularly in first-year, required, or “big” classes, have a lone-wolf mentality about curriculum. In most universities I’m aware of, the undergraduate curriculum is decided by the department, including which books to use. Sometimes the department develops a reader; at other times it agrees on a consensus anthology or workbook. But the department usually settles on a curriculum, and everyone adopts it (sometimes begrudgingly).

If law schools did this, it would open up significant opportunities to reduce the work associated with developing materials, too. For instance, the Contracts professors could agree to create a batch of case materials together and supplement it with their own preferred additional cases. If it occurs collectively, it cuts the costs for the entire class regardless of the professor, it distributes the work across the faculty, and the materials are easily used and adapted by different professors or in different years in the future. (In many of the first-year “common law” classes, with relatively few updates to the material, this is particularly beneficial.) It might be that the law school has to incentivize the faculty with some kind of modest stipend the first time they do this. But that would pale in comparison to the long-term casebook costs to students.

Of course, one could bypass all this by adopting an open-source casebook. But I’m thinking creatively, particularly for those courses where (1) the faculty cannot agree on one of those open-source casebooks or (2) the course simply lacks such a casebook.

In any case, I hope to move toward free alternatives in the years ahead—my own excuses have held me back over these years. But institutionally, I believe law schools can be more cognizant of the costs shifted to students and how some school-centered decisionmaking can improve the situation all around.

MBE scores improve for July 2019 exam

The National Conference of Bar Examiners recently shared that the mean Multistate Bar Exam score improved for the July 2019 test—the biggest improvement in over a decade.

Coming off a 34-year low in MBE scores, this is welcome news. The number of first-time test-takers was roughly the same for the July 2018 and July 2019 exams, which is also good news—it likely means first-time pass rates will improve, at least modestly, in most jurisdictions. Still, the 141.5 mean score is a far cry from the July 2008 administration, which had a 145.6 mean. Schools continue to see higher-than-hoped failure rates because admissions and retention practices still haven’t kept pace with changes in student quality. Whether this July is a turning point for future years remains to be seen—pessimistically, the uptick in July 2017 scores didn’t portend much good news for July 2018, but time will tell.

Is your law school being ranked by an SEO clickfarm? Click here to find out!

There are a lot of law school rankings out there. And law schools are often desperate to validate themselves by some—really, too often, by any—metric that purports to have some objectivity. Rankings are numerical, and often that’s enough to suggest that something objective is happening.

So when Above the Law picked up a ranking from “Online Paralegal Programs”—which became its second most-read story of the week (and a click-through story, ‘natch)—any sensible person would ask, “What?” That is, the site online-paralegal-programs.com (which based on its URL alone should fail any scrutiny) offers what, exactly?

Take a visit to the site, and you’ll see all the stock images and hot links of a typical SEO clickfarm. There are dozens of articles written in relatively poor English with a stream-of-consciousness quality about them—and with a lot of self-referential links along with random infographics. Authors are rarely identified for any of the stories. Each page opens with a “sponsored” set of paralegal schools for you to sift through.

The “About Us” page also doesn’t pass the Turing test:

Online Paralegal Programs was formed by a small group of friends when we realized there was a shortage of truthful, unaffiliated, information available to students online. We want to help students get informed so they can make the best decision regarding their education and employment. Our goal is to provide useful, well researched rankings and resources to those interested in going to work in the paralegal field.

Our main editor is Oliver Plante. Oliver has been interested in the paralegal field his whole life. He was frustrated at the lack of information available to those seeking to become educated when he was looking to become educated himself. It was his frustration that caused us to start this site.

I confess, I find it unlikely that groups of friends harbor a multi-year passion for online paralegal rankings websites. The only name on the site, Oliver Plante, sounds suspiciously like a caller to Moe’s Tavern—and, unless he’s a tech consultant in Toronto, the name may well be fictitious.

Of course, law school social media accounts eagerly promoted being ranked by this SEO clickfarm. Because, truth be told, for a lot of law schools, the source doesn’t matter.

The methodology is entirely subjective? Still promote.

The rankings come from 2014? (Scroll to the bottom to see the one comment.) Still promote.

A story by author “Oscar Jenkins,” with no bio or ability to contact? Still promote.

We live in an era where law schools constantly fret about misinformation and “fake news,” and lament how they feel beholden to meaningless rankings. But the rapid spread of “rankings” like these undermines both claims. Modest scrutiny should precede any such sharing.

Do state bar licensing authorities distrust law schools?

It’s late July, so it’s time for another round of op-eds and blog posts about the bar exam—it doesn’t test the things that are required of legal practice, the cut score is unjustifiably high, it’s a costly and burdensome process for law students, etc.

Granted, these arguments may have varying degrees of truth, but, as any reader of this blog no doubt knows, I am pretty skeptical of these claims—and I say that as a law professor who, in my own self-interest, would subjectively like to see an easier bar exam for law school graduates. But graduates have had persistently low scores for coming up on half a decade, mostly attributable to the decline in admissions practices at many law schools. And I think we too quickly conflate a lot of arguments about the bar exam.

But I’ve long had an uncomfortable thought about the bar exam as I’ve read the claims of legal educators (often law school deans) over the last several years. Law schools complain that their students have invested three years of their lives, plus tuition, plus the effort to pass the bar exam, and many fail—only, of course, to retake it at still more time and cost before ultimately passing (or perhaps never passing). Isn’t it unfair to these graduates?

Maybe, of course, depending on the “right” cut score in a jurisdiction. But… what about the opposite perspective? That is, are law schools graduating students who are not qualified to engage in the practice of law?

That’s a very cold question to ask. The ABA’s (new, slightly higher) standard for accrediting law schools is that at least 75% of a school’s graduates must pass the bar exam within two years—accreditation has long had an outcome-oriented element. So the ABA accepts that law schools can graduate a significant cohort who are never able to pass the bar.

Now, achieving a 100% first-time bar passage rate is pretty challenging—even at the most elite law schools, in even the biggest boom times of legal education, there are usually at least a couple of students who fail the bar exam on the first attempt, whether for lack of effort or because of personal circumstances rather than lack of ability.

But nevertheless, why do state bar licensing authorities—which also have a role in the accreditation of schools in the state (even if they mostly outsource it to the ABA)—require graduates of in-state law schools to take the bar exam? Does it reflect a distrust of those in-state law schools?

There’s only one state now with “diploma privilege,” Wisconsin. That is, graduates of the law schools at the University of Wisconsin and Marquette University are automatically admitted to the bar. Many more states had diploma privilege several decades ago, but those privileges have gradually been eliminated until just Wisconsin’s remains.

Some complain about Wisconsin’s diploma privilege in the vein of, “Does it seem like Wisconsin’s law schools are really teaching sufficiently Wisconsin-centric law to preclude the need to take the bar exam?” But I think that mistakes what may be a driving force in these discussions (and the barrier that has arisen in jurisdictions considering reinstating diploma privilege).

In short, the bar exam is essentially a licensing authority’s way of verifying that law schools are graduating qualified practitioners of law. Yes, the bar exam may be an imperfect way of doing it. But given that bar exam performance correlates highly with law school grade point average, one can’t say it’s entirely irrelevant (unless law professors are willing to make the same claim about law school grades!).

Now imagine you’re the bar licensing authority in Wisconsin. You look at what’s happening at Wisconsin and at Marquette. And you’re satisfied—these two schools admit a good batch of students each year; their academic dismissal and transfer acceptance rules are sound; they graduate qualified students each year. Yes, maybe a few would fail the bar exam in Wisconsin each year—but we know there can be some randomness, or some cost of retaking for candidates who’ll ultimately pass, and the like. But the licensing authority trusts the law schools in the state. The law schools are consistently graduating students who, on the whole, are capable of practicing law in the state.

That’s a really good relationship between the state bar licensing authority and the law schools in the state, no?

So… what does that tell us about the other 49 states and the District of Columbia? (Although Alaska doesn’t have a law school….)

It may tell us that state bar licensing authorities do not have the same faith in these in-state law schools. That is, they believe law schools are not consistently graduating students capable of practicing law in the state. And that’s a cold truth for law schools to consider.

Of course, state bar licensing authorities may also have idiosyncratic reasons for preserving the bar exam (e.g., “We took the bar, so kids these days have to take the bar!”). And it might also be the case that many law schools or bar licensing authorities haven’t seriously considered trying to reinstate diploma privilege.

But let me offer three potentially persuasive reasons—reasons that should cover the ideological spectrum!—for law schools in a few jurisdictions to consider pressing for diploma privilege. I look in particular at the upper Midwest, the Great Plains, and northern New England.

First, it encourages greater diversity in the legal profession. These arguments are consistently raised in California, among other places—law schools are simply more diverse than the legal profession as a whole (due largely, in recent years, to changes in demographics), and reducing a barrier to the bar would immediately lift the diversity of the legal profession. (It would also encourage those graduates to remain in the state, as the third point below indicates.)

Second, it reduces state occupational-licensing burdens. We’ve seen a small revolution in states from Arizona to Pennsylvania aimed at reducing occupational-licensing burdens, from narrowing the kinds of positions that require licenses to allowing interstate recognition of occupational licenses. Reducing the burdens of licensing here would be consistent with that trend—even for a long-regulated profession like law.

Third, in these jurisdictions I named, states can offer a competitive advantage against other states where demographics favor more rapid population growth. Declining birth rates, aging populations, migration patterns, whatever it may be—there is simply less growth in the upper Midwest, Great Plains, and northern New England than other areas of the country. By offering in-state graduates the guarantee of bar admission, there is a greater incentive for these younger attorneys to stay in the state and practice locally rather than migrate elsewhere.

I also mention these jurisdictions because many have just one or two law schools, similar to Wisconsin, making it relatively easy for the schools to act together (or as one institution!) to meet the standards that would satisfy the state bar licensing authority.

The tradeoff for law schools? All the law schools in the state have to admit and graduate students who consistently appear able to pass the bar exam and practice law—a particularly high first-time pass rate and a near-100% ultimate pass rate.

As law schools have, for a few years now, reduced admissions standards to preserve revenue, this is a particularly challenging prospect. State bar licensing authorities often appear increasingly distrustful of law school behavior, just as law schools often appear increasingly distrustful of state bar licensing authority behavior.

But developing a local community of trust between the state bar and in-state law schools could redound to significant benefits for all parties in short order. Whether that claim can be made persuasively, and whether law schools could alter their behavior in the short term for a potential long-term improvement of both their graduates’ positions and their state bar’s position, remains to be seen.

Significant one-year peer USNWR survey score drops, their apparent causes, and their longevity

The peer score from USNWR’s annual law school rankings consists of the results of a survey it sends out to around 800 voters. Those voters are the dean, the associate dean for academics, the chair of the hiring committee, and the most recently tenured faculty member at each law school. Response rates tend to be fairly high, usually around 70%. Voters are asked to evaluate schools on a scale of 1 (marginal) to 5 (outstanding), or N/A if a voter doesn’t have enough information. Those results are averaged into each school’s “peer score.”
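As a minimal sketch of that arithmetic—assuming, as an illustration rather than USNWR’s published method, that “N/A” responses are simply excluded and the average is reported to one decimal place:

```python
from statistics import mean
from typing import Optional

def peer_score(ratings: list[Optional[int]]) -> float:
    """Average the 1-5 ratings, ignoring N/A (None) responses, to one decimal place."""
    scored = [r for r in ratings if r is not None]
    return round(mean(scored), 1)

# Hypothetical ballots for one school: 1 (marginal) to 5 (outstanding), None = N/A
print(peer_score([3, 4, 2, None, 3, 3, 4, None, 2]))   # -> 3.0
```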

These results have been remarkably stagnant for decades for most schools. But a handful of schools have seen significant one-year drops in their peer scores, and I walk through the most notable ones below.

Of course, I can only guess as to why there were these drops, but, for most schools, we have pretty good contemporaneous evidence of (negative) newsworthy events that likely prompted the drop.

(Please note, I use the year the ranking is published. USNWR calls the rankings published in 2019 the “2020 rankings,” but I use the date 2019 instead. The survey is sent out in the fall of the prior year, so a survey for the 2019 rankings is sent out around November 1, 2018.)

Rutgers-Camden, 2002, 2.8 to 2.5. This may be the only drop truly due to (mis)fortune or chance. In the three previous surveys, Rutgers-Camden had scores of 2.7, 2.6, and 2.6. In 2001, it rose to 2.8. In 2002, it dropped to 2.5, and it remained in the 2.5-to-2.6 range for the next decade before settling at 2.4.

There’s no particular scandal or controversy that arose. Instead, the 2.8 just might’ve been the fortune of one year, and the following 2.5 the misfortune of another. (Rutgers-Camden later merged with Rutgers-Newark.)

Loyola Law School, 2009, 2.6 to 2.3. By far the most inexplicable drop turned out to be attributable to a USNWR error. Loyola had held a 2.5 to 2.6 peer score for the decade before 2009. But in 2009, its peer score abruptly plummeted 0.3 to 2.3. The reason? USNWR renamed Loyola as “Loyola Marymount University” in the poll. While the law school has long been affiliated with LMU, its brand had developed around a different name, which suddenly changed for one year.

The following year, Loyola’s name returned to “Loyola Law School,” its peer score rebounded to 2.6, and it has remained around there ever since. (It’s also the only time a school has risen 0.3 or more in a single year in the entire history of USNWR’s peer surveys.)

Illinois, 2012, 3.5 to 3.1. Illinois consistently held a peer score of 3.4 to 3.6 for a decade. In 2011, a story broke that an admissions dean had single-handedly inflated median LSAT scores at Illinois in six of the previous 10 years. Illinois was fined $250,000 and censured. In the 2012 rankings, Illinois’s peer score plunged from 3.5 to 3.1.

The Illinois drop was significant because of how high Illinois used to be. And it’s significant because it makes it that much harder to climb back. Illinois rose to a 3.3 one year but hasn’t gotten past that, at 3.2 in the most recent survey. The residual impact from an event a decade ago remains (in my view, an unjustifiable result).

Villanova, 2012, 2.6 to 2.2. For a decade, Villanova’s scores hovered between 2.5 and 2.7. But in a different scandal in 2011, the news broke that Villanova “knowingly” reported inaccurate LSAT & UGPA data. It was censured by the ABA.

Villanova has mostly recovered, steadily rising back to a 2.5, but it has yet to return to 2.6. Like Illinois, the impact in the peer score has far outlasted any formal ABA sanction.

St. Louis University, 2013, 2.4 to 2.0. One of the more notorious drops in peer score arose after a series of controversies—the law school dean resigned in protest in August 2012, and disputes about university leadership were prominent that fall. It’s one of just three times that a school has dropped 0.4 in the peer score, assuredly in part because the news was still fresh when the survey circulated.

St. Louis has never returned to a 2.4, but it has slowly improved since the drop and has stood at a 2.3 for the last few surveys.

Albany, 2015, 2.0 to 1.7. For years, Albany had held a 2.1 or 2.2 peer score. In 2013, that score settled to a 2.0 and remained there in 2014. That isn’t remarkable on its own—small declines like that happen with some regularity. But in 2015, the score dropped 0.3 to 1.7. In early 2014, the school made headlines for buyout proposals amidst financial exigency and faculty backlash. These were some of the first public signs of financial strain at U.S. law schools after the economic downturn—recall that enrollment jumped for the Class of 2012 and has dropped ever since. While many schools felt financial strains, few made them public—today, of course, many more have had their financial struggles made public.

The impact didn’t last long. By 2016 the school returned to a 1.9, and in 2017 a 2.0 again, which is its score this year, too.

Vermont Law School, 2019, 2.2 to 1.9. The most recent drop took place in the most recent rankings. In the summer of 2018, Vermont announced that 14 of its 19 tenured professors would lose tenure—an announcement that came just a few months before ballots went out. Time will tell what happens next year, but we should expect a small bounce back up.


This post isn’t really meant to shame any particular school or to endorse how peer voters have reacted to scandals. It’s simply to note that some strong reactions do exist.

It also highlights the stickiness of the rankings. The cohort of voters can change fairly frequently: voters include the dean, the associate dean for academics, the chair of faculty appointments, and the most recently tenured faculty member, and those positions turn over with some regularity—the typical dean’s tenure is three years, new faculty hires mean a steady stream of tenure grants, appointments chairs rotate as service commitments change, and so on. Nevertheless, the peer score remains tough to move. Smaller controversies, a USNWR mistake, or apparent randomness appear to have little staying power. But bigger scandals have prevented scores from ever returning to where they were before the scandal—even if the school has faced appropriate sanction and all the people involved have moved on. Whether it’s inertia or long punitive (and vindictive?) memories, the peer scores can remain depressed.

Importantly, I hope some law professors might reconsider why they may be voting the way they are. Are they voting because of the present state of the law school—its student body quality, its student outcomes, its faculty quality, its administrators, etc.—or because of some past act of the law school? By reflecting on why voters vote the way they do, we may see less (arguably) punitive voting.