The tension in measuring law school quality and graduating first-generation law students

We (I’ll use the pronoun capaciously here) know how most law school rankings are generated, and how most assessments of law school quality (at least, as measured for law students) are developed. We look to the incoming metrics, the quality of LSAT scores and undergraduate GPAs among incoming students. We look at attrition rates, including how many students are academically dismissed or otherwise withdraw. We look at bar exam passage rates. We look at student debt loads, including how many graduate without debt. We assess “gold standards” in employment outcomes, specifically graduates who land high-quality positions like full-time, long-term, bar passage-required jobs; or those with “elite” outcomes like federal judicial clerkships or associate positions at large law firms.

And while we might pick at elements of these rankings, we might also look for “better” ways to measure quality. I think employment outcomes are a great standard. I also like to examine law school student debt loads.

But over the years, I keep wondering whether these “better” ways are, really, better. Prompted by a recent story about the USNWR “best colleges” rankings, I thought I’d muse about why.

Imagine two prospective students, Student A and Student B. Both have poor “predictors” of law school performance (a below-median LSAC index score, say). Student A is the child of a successful attorney in a mid-sized practice; or perhaps the parent founded and runs a small firm. Student B is a first-generation law student, perhaps a first-generation college student.

Both students will pay the sticker price for law school tuition and cost of attendance. They don’t earn “merit” scholarships. At many law schools, that total can easily exceed $100,000, often tops $200,000, and nowadays can even reach $300,000.

Student A has the wealthy attorney-parent pay the bill; Student B secures, say, $150,000 in federal loans (if the school lacks much in the way of need-based aid, or if the student is just well-off enough to miss the cutoff).

Both students achieve what their predictors predicted: rather marginal law school performance. Job-hunting is tough for students with a bottom-quartile law school grade point average. After graduation, however, Student A gets a job at the parent’s law firm. Student B is left unemployed and searching, or perhaps in a marginally attached job.

By a pair of the metrics I admire most—low debt loads and high-quality employment outcomes—Student A looks much better than Student B. But isn’t it simply the path of least resistance to admit Student A over Student B and preserve legacy status? Or, if a school does well on these metrics, how often is it simply because its cohort of students more closely resembles A than B?

It’s very expensive for a law school to help Student B succeed, both in reducing debt and in ensuring employment placement (or, perhaps as a prerequisite, ensuring academic success).

I don’t have great answers at this point, except to say that I’m puzzling over the next level of data. I think law schools rightly ought to move away from focusing on inputs toward focusing on outputs. (USNWR law rankings, not so much.) At the same time, I confess that such measures carry their own limitations—at least, to the extent we think we want to reward schools for, say, upward social mobility, or for actually adding value to students as opposed to conferring status. I hope to think more about this in the years ahead.