Law school inputs, outputs, and rankings: a guide for prospective law students

As we approach another law school rankings season, Dean Paul Caron has compiled a tentative ranking of the “admissions” metrics that USNWR uses as a component of its law school rankings methodology. Median LSAT score of the incoming class, median UGPA of the incoming class, and acceptance rate together account for 25% of the rankings.

Interestingly, in my judgment, it’s also probably one of the readiest ways for a prospective law student to judge which schools are most overvalued and undervalued by USNWR.

Law school inputs are, I think, probably the very weakest measure of law school quality from a student’s perspective. Prospective students, I think, care far less about the academic credentials of those around them, and far more about, say, their employment outcomes, their debt levels, or even the profession’s perception of their institution. (Above the Law’s rankings long ago focused on outputs over inputs. Professor C.J. Ryan has also looked to “value-added” rankings, which measure how law schools add value to the student experience.)

Indeed, it’s remarkable to me that just 20% of the rankings focuses on employment and bar outcomes, while 25% focuses on admissions statistics. We know law schools spend significant resources on distorting admissions practices to meet USNWR metrics.

But if you’re a student, which is better? To be at a law school with a median LSAT of 170 but a 50% high-quality job placement rate? Or at a law school with a median LSAT of 160 but an 80% high-quality job placement rate? One could look at the same figures for students who graduate with a low debt-to-income ratio, too.

Admissions-centered rankings, then, can help a prospective law student discern which schools are overvalued and undervalued by existing rankings, and discount accordingly. If two schools sit beside each other in the USNWR rankings, it might be because one has much better inputs and the other much better outputs. Or if two schools appear quite disparate, it might be only because of a disparity in inputs, not outputs.

This isn’t to say that law school inputs are unimportant. They are important—to law schools, not (mostly) to law students. They are important for predicting likelihood of success in law school, so law schools want to admit students with a high likelihood of success. (For marginal students admitted to a school, these figures might be relevant as an indicator of the challenges they may face in the first-year curriculum in particular.)

But those figures aren’t generally, in my judgment, useful for prospective law students. Separating the components of the rankings can provide better information for decisionmaking.