USNWR incorporates faculty citations, graduate salary, and debt data into its college metrics. Will law schools be next?

USNWR has many rankings apart from its graduate school (and specifically law school) rankings, of course. (One of my favorites is its ranking of diets.) Its collegiate rankings have been around for a long time and have been influential, and because they are higher education rankings, it is useful to see what USNWR is doing with them in case it portends future changes elsewhere.

USNWR has bifurcated some of its methodology. For “national universities,” it uses factors different from those it uses for the other schools it ranks. (Law schools are all ranked together in one lump.) And this year’s edition included three changes, in some or all of the rankings, that are notable for this blog’s purposes.

First, debt.

Borrower debt: This assesses each school's typical average accumulated federal loan debt among only borrowers who graduated. It was sourced from the College Scorecard, a portal of higher education data administered by the U.S. Department of Education.

. . .

In previous editions, the data was sourced from U.S. News' financial aid surveys and assessed mean debt instead of median debt. There are two reasons behind this change. One is that 50th percentile amounts are more representative than average amounts because they are less impacted by outliers. The other is that College Scorecard's data is sourced from its department's National Student Loan Data System (NSLDS), which keeps records of federal loan disbursements and therefore is a more direct source of information than school-reported data.
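
The statistical point is easy to illustrate with a toy example of my own (hypothetical figures, not Scorecard data): a single unusually large borrower drags the mean up substantially while barely moving the median.

```python
# Toy illustration (hypothetical figures, not College Scorecard data):
# one outlier borrower moves the mean sharply but barely moves the median.
from statistics import mean, median

debts = [24_000, 26_000, 27_000, 28_000, 30_000]
print(mean(debts), median(debts))   # 27000 27000

debts += [150_000]                  # add one unusually heavy borrower
print(mean(debts))                  # 47500 -- the mean jumps by $20,500
print(median(debts))                # 27500 -- the median moves by $500
```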

As readers of this blog know, I’ve long used similar debt metrics for law schools and have found them useful. And readers may recall that USNWR used to collect debt data; it incorporated that data into the last two years of rankings; and then it stopped this year with the rise of the “boycott.” Law schools stopped voluntarily reporting indebtedness, and because USNWR committed to using only publicly available information, it dropped the metric.

The College Scorecard is publicly available, and it offers this debt data for USNWR to use. Will USNWR incorporate it into next year’s law school rankings? It remains a distinct possibility, as the note above suggests.

Second, citations.

To be grouped in the National Universities ranking, an institution must be classified in the Carnegie Classifications as awarding doctorate-level degrees and conducting at least "moderate research." In alignment with these schools' missions, U.S. News introduced four new faculty research ranking factors based on bibliometric data in partnership with Elsevier. Although research is much less integral to undergraduate than graduate education – which is why these factors only contribute 4% in total to the ranking formula – undergraduates at universities can sometimes take advantage of departmental research opportunities, especially in upper-division classes. But even students not directly involved in research may still benefit by being taught by highly distinguished instructors. Also, the use of bibliometric data to measure faculty performance is well established in the field of academic research as a way to compare schools.

Only scaled factors were used so that the rankings measure the strength and impact of schools' professors on an individual level instead of the size of the university. However, universities with fewer than 5,000 total publications over five years were discounted on a sliding scale to reduce outliers based on small cohort sizes, and to require a minimum quantity of research to score well on the factors. The four ranking factors below reflect a five-year window from 2018-2022 to account for year-to-year volatility.

Citations per publication is total citations divided by total publications. This is the average number of citations a university’s publications received. The metrics are extracted from SciVal based on Elsevier’s Scopus® Data.

Fields weighted citation impact is citation impact per paper, normalized for field. This means a school receives more credit for its citations when in fields of study that are less widely cited overall. The metrics are extracted from SciVal based on Elsevier’s Scopus® Data.

The share of publications cited in the top 5% of the most cited journals. The metrics are extracted from SciVal based on Elsevier’s Scopus® Data.

The share of publications cited in the top 25% of the most cited journals. The metrics are extracted from SciVal based on Elsevier’s Scopus® Data.

Each factor is calculated for the entire university. The minority of universities with no data on record for an indicator were treated as 0s. The Elsevier Research Metrics Guidebook has detailed explanations of the four indicators used.

Elsevier, a global leader in information and analytics, helps researchers and health care professionals advance science and improve health outcomes for the benefit of society. It does this by facilitating insights and critical decision-making for customers across the global research and health ecosystems. To learn more, visit its website.
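
To make the mechanics concrete, here is a rough sketch of the first factor and the small-school discount. This is my reconstruction from the description above; USNWR does not publish the exact sliding scale, so the linear ramp below 5,000 publications is an assumption of mine.

```python
# Sketch of the citations-per-publication factor and the small-school
# discount described above. The linear sliding scale is my assumption;
# USNWR does not publish the exact discount function.

def citations_per_publication(total_citations: int, total_publications: int) -> float:
    """Average citations received per publication over the five-year window."""
    return total_citations / total_publications

def small_school_discount(total_publications: int, floor: int = 5_000) -> float:
    """Hypothetical linear ramp: schools below the 5,000-publication floor
    have their scaled scores discounted proportionally."""
    return min(1.0, total_publications / floor)

# Example: a school with 3,000 publications and 24,000 citations in 2018-2022
pubs, cites = 3_000, 24_000
raw_score = citations_per_publication(cites, pubs)    # 8.0 citations per paper
discounted = raw_score * small_school_discount(pubs)  # 8.0 * 0.6 = 4.8
print(raw_score, discounted)
```

The same scaling logic would presumably apply to the other three factors, with field-weighted citation impact normalizing each paper’s citations against the average for its field rather than using raw counts.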

USNWR had considered using citation metrics from Hein for law school rankings years ago. I tried to game it out at the time to show that it might not change much, as citation performance was fairly closely related to the overall peer score, but that it could affect how the overall rankings look for schools whose citation metrics diverge from their peer scores. And just as it would have relied on Hein there, here USNWR outsourced the citation data to Elsevier’s Scopus.

I do not know whether USNWR would choose to use Scopus for law schools, as it has a much smaller set of legal citations than other databases. (I believe Scopus records less than 10% of the citations that Westlaw and Google Scholar record for my work, as one example.) But USNWR’s willingness to engage with scholarship for national universities suggests it might consider doing the same for law schools. Of course, law schools are all ranked together, rather than divided into “research” law schools and “teaching” law schools, for lack of better terms.

Third, salaries.

College grads earning more than a high school grad (new): This assesses the proportion of a school's federal loan recipients who in 2019-2020 – four years since completing their undergraduate degrees – were earning more than the median salary of a 25-to-34-year-old whose highest level of education is high school.

The statistic was computed and reported by the College Scorecard, which incorporated earnings data from the U.S. Department of the Treasury. Earnings are defined as the sum of wages and deferred compensation from all W-2 forms received for each individual, plus self-employment earnings from Schedule SE. The College Scorecard documented that the median wage of workers ages 25-34 that self-identify as high school graduates was $32,000 in 2021 dollars. The vast majority of jobs utilizing a college degree, even including those not chosen for being in high-paying fields, exceed this threshold.

The data only pertained to college graduates and high school graduates employed in the workforce, meaning nongraduates, or graduates who four years later were continuing their education or simply not in the workforce, did not help or hurt any school.

U.S. News assigned a perfect score for the small minority of schools where at least 90% of graduates achieved the earnings threshold. Remaining schools were assessed on how close they came to 90%. The cap was chosen to allow for a small proportion of graduates to elect low-paying jobs without negatively impacting a school's ranking.

The ranking factor's 5% weight in the overall ranking formula equals the weight for borrower debt, because both earnings and debt are meaningful post-graduate outcomes.
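
The capped scoring rule reduces to a one-line formula. Here is a minimal sketch; the linear scale below the 90% cap is my inference from “assessed on how close they came to 90%.”

```python
# Sketch of the earnings-threshold scoring rule described above: schools at
# or above the 90% cap get a perfect score; the linear scale below the cap
# is my inference, not USNWR's published formula.

def earnings_score(share_above_threshold: float, cap: float = 0.90) -> float:
    """Score in [0, 1] for the share of federal loan recipients out-earning
    the $32,000 median for high school graduates."""
    return min(share_above_threshold, cap) / cap

print(earnings_score(0.95))  # 1.0  -- at or above the cap, a perfect score
print(earnings_score(0.72))  # ~0.8 -- 72 points is 80% of the way to 90
```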

This is something like the flip side of the debt question, which I’ve also written about, again from publicly available data. And it would solve some of the problems USNWR has had in conflating many job categories into one, or in weighting them by arbitrary percentages.

All three are fairly interesting—and, might I say, on the whole, good—additions to the collegiate rankings. Yes, as with any metric, one can quibble about the weights given to these factors and about how they can be gamed.

But I am watching closely now to see whether USNWR incorporates factors like these into its next round of law school rankings. If it does, the projected rankings I offered this spring aren’t worth much.