Excess of Democracy


Some questions about mashups of law school rankings

Each year, the Princeton Review surveys law students around the country and uses those surveys to create eleven rankings lists. (Three other lists are based on school-reported data.) The data includes factors like “Best Classroom Experience” and “Most Competitive Students.” I’ve previously noted that I think these surveys are pretty good because they are designed not to be “comprehensive” but to rate schools in different categories, even if the methodology is a fairly black box.

One oft-shared survey among law professors is “Best Professors.” That ranking, according to the Princeton Review, is “Based on student answers to survey questions concerning how good their professors are as teachers and how accessible they are outside the classroom.” Here’s the “top 10” as reported by the Princeton Review for 2020:

  1. University of Virginia

  2. Duke University

  3. University of Chicago

  4. Washington & Lee University

  5. Stanford University

  6. University of Notre Dame

  7. Boston College

  8. Boston University

  9. University of Michigan

  10. Northwestern University

Dean Paul Caron over at TaxProf approaches it somewhat differently. He takes two of the Princeton Review categories, “Professors: Interesting” (based on a student survey of “the quality of teaching”) and “Professors: Accessible” (based on a student survey of “the accessibility of law faculty members”), each of which the Princeton Review rates on a scale of 60 to 99. Combining them yields a different top 10:

1. University of Virginia

2. University of Alabama (tie)

2. University of Chicago (tie)

2. Duke University (tie)

2. Pepperdine University (tie)

2. Stanford University (tie)

2. Washington & Lee University (tie)

8. Charleston (tie)

8. Notre Dame (tie)

8. Regent (tie)

8. Vanderbilt (tie)

It’s an interesting comparison. Ostensibly, the “Best Professors” rating is based on how good the professors are as teachers (i.e., quality) and on accessibility. But mashing up the two Princeton Review categories on those same topics yields different results: Alabama, Pepperdine, Charleston, Regent, and Vanderbilt go from “unranked” to the “top 10” of Dean Caron’s mashup rankings, while Boston College, Boston University, the University of Michigan, and Northwestern University drop out of the “top 10.” What makes the Princeton Review’s judgment of “Best Professors” different from a simple mashup of two categories?
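
To make the mechanics concrete, here is a minimal sketch of this kind of two-category mashup, using made-up ratings on the 60-99 scale rather than the Princeton Review’s actual numbers, and assuming the two categories are simply added together (one plausible way Dean Caron’s combination could work):

```python
from itertools import groupby

# Hypothetical ratings on the Princeton Review's 60-99 scale (not real data).
ratings = {
    # school: ("Professors: Interesting", "Professors: Accessible")
    "School A": (99, 98),
    "School B": (98, 99),
    "School C": (99, 97),
    "School D": (95, 96),
}

# Combine the two categories by simple addition and sort from high to low.
combined = sorted(
    ((sum(scores), school) for school, scores in ratings.items()),
    reverse=True,
)

# Standard competition ranking: tied schools share a rank, and the next
# rank skips ahead (which is how a "1, 2, 2, ..., 8" list comes about).
rank = 1
for total, group in groupby(combined, key=lambda pair: pair[0]):
    tied = list(group)
    for _, school in tied:
        print(f"{rank}. {school} ({total})")
    rank += len(tied)
```

Averaging the two scores instead of adding them would produce the same ordering; the point is simply that any such combination bakes in an implicit equal weighting of the two categories.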

Dean Caron goes on to create a new “overall law school ranking,” giving “equal weight” (20% each) to five categories: (1) selectivity in admissions, (2) academic experience, (3) professor teaching quality, (4) professor accessibility, and (5) career rating.
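
Mechanically, that kind of overall index is just a weighted average. Here is a short sketch with invented category scores (again on the 60-99 scale) and the 20% weights Dean Caron uses:

```python
# Sketch of an equal-weight "overall" index across five categories.
# The category scores below are invented placeholders, not real data.
CATEGORIES = ["selectivity", "academic", "teaching", "accessibility", "career"]
WEIGHTS = {c: 0.20 for c in CATEGORIES}  # equal weight, per Dean Caron's approach

schools = {
    "School A": {"selectivity": 99, "academic": 95, "teaching": 92,
                 "accessibility": 88, "career": 97},
    "School B": {"selectivity": 90, "academic": 93, "teaching": 98,
                 "accessibility": 99, "career": 89},
}

def overall(scores, weights):
    """Weighted average of a school's category scores."""
    return sum(weights[c] * scores[c] for c in weights)

for school, scores in sorted(schools.items(),
                             key=lambda kv: overall(kv[1], WEIGHTS),
                             reverse=True):
    print(f"{school}: {overall(scores, WEIGHTS):.1f}")
```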

The Princeton Review actually disclaims doing this with its information:

Note: we don't have a "Best Overall Academics" ranking list nor do we rank the law schools 1 to 167 on a single list because we believe each of the schools offers outstanding academics. We believe that hierarchical ranking lists that focus solely on academics offer very little value to students and only add to the stress of applying to law school.

So this raises a new question: are these five categories of equal value to prospective students? Specifically, is “faculty accessibility” worth as much as “career rating,” that is, “the confidence students have in their school's ability to lead them to fruitful employment opportunities, as well as the school's own record of having done so”? I’m not terribly convinced they are of equal value.

For the Princeton Review, “career rating” actually includes several judgments compiled into a single index—practical experience, externship opportunities, student perceptions of preparedness, graduate salaries, bar passage-required jobs, and bar passage rates, all lumped into one 60-99 score.

Any ranking that combines multiple components into a single result requires value judgments about how to weigh those components. The USNWR ranking, for instance, places about 22% of the overall score on career and bar outcomes. Above the Law bases all of its metrics on outcomes, mostly employment-related ones.
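
To illustrate how much those weighting judgments can matter, here is a small sketch in which the same two hypothetical schools trade places depending on the scheme; the “career-heavy” weights are made up, only loosely inspired by USNWR’s roughly 22% on outcomes:

```python
# How weighting choices can flip a ranking: the same two hypothetical
# schools, scored under equal weights and under a career-heavy scheme.
schools = {
    # school: {category: placeholder 60-99 score}
    "School A": {"selectivity": 99, "academic": 92, "teaching": 88,
                 "accessibility": 85, "career": 98},
    "School B": {"selectivity": 90, "academic": 94, "teaching": 98,
                 "accessibility": 99, "career": 86},
}

schemes = {
    "equal (20% each)": {"selectivity": 0.20, "academic": 0.20, "teaching": 0.20,
                         "accessibility": 0.20, "career": 0.20},
    # A made-up scheme that leans heavily on career outcomes.
    "career-heavy":     {"selectivity": 0.25, "academic": 0.15, "teaching": 0.10,
                         "accessibility": 0.05, "career": 0.45},
}

for name, weights in schemes.items():
    ranked = sorted(
        schools,
        key=lambda s: sum(weights[c] * schools[s][c] for c in weights),
        reverse=True,
    )
    print(f"{name}: {' > '.join(ranked)}")
```

In this toy example, School B tops the equal-weight index while School A tops the career-heavy one, which is exactly the kind of value judgment a single combined number hides.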

I bracket all this to say that I do not, of course, suggest that professor accessibility is unimportant. (If you’ve ever seen me at one of the four law schools I’ve taught at, you’ll see I’m physically in the building a lot; I value accessibility and making myself accessible tremendously!) But whether it’s of equal importance is something more puzzling for me. It’s valuable to students, but how valuable? How valuable compared to other things?

Professor Paul Gowder suggested that perhaps it’s valuable to have measures that are not simply highly correlated with one another, as so many rankings metrics tend to be, such that the rankings can truly measure lots of different things. It’s a fair point, and one to consider: faculty accessibility is negatively correlated (-0.12 by this measure) with career outcomes. Of course, as I ran these figures, that result struck me as a bit strange: I might think that better faculty access (including mentoring!) would lead to better career counseling outcomes and better bar passage rates. Additionally, Professor Rob Anderson noted that there may be questions about how we define “faculty accessibility” and whether some faculty (like professors with childcare responsibilities) are disproportionately affected.

Other Princeton Review measures are more highly correlated with career outcomes, like “teaching quality” (0.36) and “academic experience” (0.56). Least surprising, of course, is that “admissions selectivity” (0.71) is the most highly correlated with career outcomes.
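
For what it’s worth, these correlations are easy to check once you line up two ratings across the same set of schools; here is a minimal sketch using Python’s statistics.correlation (Python 3.10+) with placeholder numbers rather than the actual Princeton Review data:

```python
from statistics import correlation  # Pearson's r; available in Python 3.10+

# Placeholder ratings, one value per school, in the same school order for
# both lists (not the actual Princeton Review or career-rating figures).
accessibility = [94, 90, 97, 88, 92, 85, 96]
career_rating = [89, 93, 84, 95, 90, 96, 87]

r = correlation(accessibility, career_rating)
print(f"Pearson r between accessibility and career rating: {r:.2f}")
```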

All this is to say, I appreciate when folks press forward with alternative measures of evaluating law schools and comparing law schools to one another. Personally, I think we don’t spend enough time evaluating a lot of important things about law schools, and I’ve tried to put some of them here on this blog: debt-to-income ratios, median debt loads, optimal employment outcomes, the role of school-funded jobs, and so on.

Each of these measures, though, like the Princeton Review rankings, is hard to compare to the others without making judgments about how to weigh them. Maybe we should weigh a series of factors equally; maybe there are reasons not to. And I confess I have far more questions than answers! But it’s also a reason to wonder whether “comprehensive” or mashup rankings are as valuable as a series of discrete evaluation categories that offers students opportunities to assess how valuable the individual components are.