The LSAT is an important predictor of law school success. The higher your LSAT score, the higher your law school grades are likely to be. The correlation is not perfect, but it is strong. And when combined with your undergraduate grade point average (UGPA)—yes, regardless of your major, grade inflation, school disparities, and all that—it predicts law school success even better.
But the LSAT has changed over the years. As has its weight in the USNWR rankings. Many law school admissions offices, however, look at the LSAT like it’s 2005—as if scores mean what they meant back then, and as if the USNWR rankings weigh them the way they did back then. A lot has changed in a generation.
Changes to reporting of LSAT scores and repeaters
The Law School Admission Council, which administers the LSAT, has long maintained that the most accurate predictor of law school grade point average (LGPA) is the average of all LSAT scores. If a prospective student takes the LSAT twice and scores a 164 and a 168, the most accurate way of evaluating that student is to conclude that the student received a 166. And schools reported that average score to the American Bar Association.
But in 2006, the ABA changed what schools must report. It allowed schools to report the highest LSAT score, not the average of all LSAT scores. For this student, then, the reported score is no longer a 166 but a 168—even though the higher score is a less accurate predictor.
This change incentivizes repeat test-taking. Back in 2010, about two-thirds of test-takers took the exam only once; the new rule gave more of them a reason to repeat it. But there was an upper limit. LSAC administered the exam only four times a year, and it permitted test-takers to take it no more than three times in a two-year period.
But in 2017, LSAC lifted that cap and allowed unlimited retakes. It has since brought the number down to five in a “reportable score period” (around six years) and seven overall—still far more than three. And it now offers around eight administrations of the LSAT each year, up from four.
Granted, the total number of people inclined to take the exam five times is quite small. But repeaters continue to climb. In 2023-2024, a majority of the LSATs administered went to repeat test-takers.
Additionally, about 20% of test-takers do not have a “reportable score.” Under an option LSAC has developed over the last two decades, a test-taker can “peek” at a score and cancel it after seeing it, preventing schools from ever viewing it. (This is mostly irrational behavior from prospective students, because schools will still report only the highest score received, but it further muddies the waters in identifying the difference between the highest score and the average score.)
So, if you are a law school inclined to look at the LSAT as a predictive tool of student success, you would want to use the average score. But now that the ABA permits reporting the highest score—and because the USNWR rankings likewise use that score—all of the incentives are to rely on the highest score in admissions decisions, even though it is less accurate at predicting student success. True, gains among repeaters tend to be modest, about 2 points for a typical test-taker. But, as I continue below, there are cumulative effects to these changes.
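To make that concrete, here is a minimal simulation sketch, with entirely hypothetical parameters, of why the highest of several attempts drifts upward as retakes accumulate while the average stays anchored to a test-taker’s underlying ability:

```python
# Minimal sketch (hypothetical parameters): the highest of several
# noisy LSAT attempts inflates as retakes accumulate, while the
# average of the attempts stays near the underlying "true" score.
import random
import statistics

random.seed(0)

TRUE_SCORE = 160      # hypothetical underlying ability, on the LSAT scale
ATTEMPT_SD = 3.0      # hypothetical per-attempt measurement noise
TRIALS = 100_000      # simulated test-takers per retake count

for attempts in (1, 2, 3, 5):
    highest, average = [], []
    for _ in range(TRIALS):
        scores = [random.gauss(TRUE_SCORE, ATTEMPT_SD) for _ in range(attempts)]
        highest.append(max(scores))
        average.append(statistics.fmean(scores))
    print(f"{attempts} attempt(s): "
          f"mean highest = {statistics.fmean(highest):.1f}, "
          f"mean average = {statistics.fmean(average):.1f}")
```

On these made-up numbers, the highest score runs about 1.7 points above the true score after two attempts and about 3.5 points above it after five, while the average barely moves. The more retakes the rules allow, the more inflation is baked into the reported score.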
Importantly, when LSAC measures the validity of the LSAT, it still measures it using the average score. Law schools, however, typically use the higher score—and therefore opt for a less valid measure of the LSAT.
(Likewise, the LSAT is more valid at predicting success when combined with UGPA in an “index score,” but most law schools do not use it this way either, again choosing a less valid way of relying on it—more on that later.)
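For illustration, an index score is simply a linear combination of the two predictors. The sketch below uses invented coefficients; in practice, LSAC supplies each school its own regression-derived weights:

```python
# A hypothetical linear "index score" combining LSAT and UGPA.
# LSAC derives school-specific coefficients by regressing first-year
# law school grades on both predictors; the numbers below are
# invented purely for illustration.
def index_score(lsat: float, ugpa: float,
                lsat_weight: float = 0.02,
                ugpa_weight: float = 1.0,
                constant: float = 0.0) -> float:
    """Weighted combination of an LSAT score and an undergraduate GPA."""
    return lsat_weight * lsat + ugpa_weight * ugpa + constant

# Two applicants with the same LSAT but different UGPAs separate
# under an index, unlike under an LSAT-only evaluation.
print(index_score(166, 3.8))  # 7.12
print(index_score(166, 3.2))  # 6.52
```

The point of the index is that the two predictors together explain more of the variation in first-year grades than either does alone.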
Extra time to take the test
Any mention of accommodations in test-taking is a fraught topic. But set aside whatever views you may have about the relationship between accommodations and test-taking. I want to point out what accommodations—and specifically, extra time on the exam—mean for using the LSAT as a predictive tool.
Data from LSAC shows that accommodated test-takers receive scores around four to five points higher than non-accommodated test-takers. Most accommodations translate into LSAT scores that still predict law school success—for instance, a visually-impaired person receiving large-print materials will receive a score that fairly accurately predicts law school success. Time-related accommodations are the exception: LSAT scores tend to overpredict law school success when test-takers receive extra time. And requests for additional time have increased dramatically over the years, from around 6,000 granted requests in 2018-2019 to around 15,000 in 2022-2023.
My point here is certainly not to debate accommodations in standardized testing. It is to point out that additional time on the LSAT makes the score less predictive, and that such accommodations have increased dramatically. In 2014, the Department of Justice entered into a consent decree with LSAC to stop “flagging” such LSAT scores. So there remains a cohort of LSAT scores, growing by the year, that are less predictive of law school success.
A change in test composition
In 1998, a technical study from LSAC looked at each of the three components of the LSAT—analytical reasoning (sometimes called “logic games”), logical reasoning, and reading comprehension. The LSAT overall predicted first-year LGPA, and each individual component contributed to that predictive power. But in 2019, LSAC entered into a consent decree settling a challenge that the analytical reasoning section ran afoul of federal and state accommodations laws. And in 2023 it announced the end of that section.
I have not yet seen any subsequent technical report from LSAC (perhaps it’s out there) explaining how it concluded that a test without logic games could be just as valid a predictive measure, particularly given its 1998 report. But certainly anecdotes, like this one in the Wall Street Journal, suggest some material changes:
Tyla Evans had almost abandoned her law-school ambitions after struggling with the logic games section. “When I found out they were changing the test, I was ecstatic,” said Evans, a 2023 George Washington University graduate. Her LSAT score jumped 15 to 20 points on the revised test, enabling a second round of applications. So far, she has received two sizable financial-aid offers and is waiting to hear from a few more schools.
The LSAT has fundamentally changed in content, which suggests that scores today are not truly comparable to scores from previous eras, and that such scores will be less predictive of success.
Opting out of the test
One more slight confounding variable, although its effect on LSAT scores is more indirect, so I’ll mention it only briefly. More students are attending law school without ever taking the LSAT. This comes from a variety of sources—growing cohorts of students admitted directly from the law school’s parent university without an LSAT score, alternative admissions tests like the GRE, and so on. Publicly disclosed LSAT quartiles, then, conceal a cohort of students who have selected out of taking the LSAT. It is hard to know precisely how this affects the overall composition of LSAT test-takers, but it is one more small detail to note.
What the rankings use
A run-of-the-mill admissions office should care about LSAT scores as a predictor of law school success. Several developments in the last generation have diluted the LSAT’s predictive power—reliance on the highest score for repeat test-takers, nearly unlimited retakes, additional time for some test-takers, a change in content, and a change in the cohort taking the exam. Nevertheless, despite all those changes, it remains a good predictor, or at least better than the alternatives.
But this admissions office probably cares a great deal about something else too—law school rankings. Rightly or wrongly (again, not a debate for this post), law school rankings, particularly the USNWR rankings, exert a great deal of influence on how prospective law students perceive schools. Even marginal changes can significantly influence admissions decisions and financial aid costs, to say nothing of their effect on students, alumni, faculty, and the university, for whom the rankings serve (again, rightly or wrongly) as a barometer of the law school’s trajectory and overall health.
USNWR does two important things with LSAT scores: one very public and recent, the other more subtle, longstanding, and often misunderstood.
First, USNWR has long used the median LSAT of an incoming class as a benchmark of the class’s overall quality (a decision long known to distort how law school admissions offices conduct their practices, as the “bottom” credentials of the incoming class look very different from the “top”—more on that in a bit). But it recently changed its formula to focus more on outputs than inputs. That meant the weight given to the median LSAT score dropped from 11.25% of the rankings to just 5%.
A related, and subtler, change: USNWR now gives significant weight to employment outcomes, 33% of the rankings. It is not only a large category but one with a huge spread from top to bottom, which has the effect of diminishing the value of the other categories. Let me offer one comparison I raised earlier, in very simple terms: moving from a school ranked around 100th in the employment rankings to around 50th, which typically means placing an additional 3-4% of the class in full-time legal jobs, is actually a bigger deal than moving from a 153 median LSAT to a 175 median LSAT (a massive shift).
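To see why the spread and the weight matter together, here is a rough back-of-the-envelope sketch. It assumes each indicator is standardized before weighting; every standard deviation below is hypothetical, chosen only to illustrate the mechanics:

```python
# Back-of-the-envelope sketch of weighted, standardized ranking
# contributions. Assumes each indicator is converted to a z-score
# before weighting; the standard deviations are hypothetical,
# chosen only to illustrate how spread and weight interact.
LSAT_WEIGHT = 0.05   # median LSAT's current weight in the rankings
EMP_WEIGHT = 0.33    # employment outcomes' current weight

LSAT_SD = 6.0        # hypothetical SD of median LSATs across schools
EMP_SD = 5.0         # hypothetical SD of employment rates (pct. points)

lsat_move = (175 - 153) / LSAT_SD * LSAT_WEIGHT  # a massive LSAT jump
emp_move = 3.5 / EMP_SD * EMP_WEIGHT             # a 3-4 pct. point jobs gain

print(f"LSAT 153 -> 175 moves the score by ~{lsat_move:.2f} weighted SDs")
print(f"+3.5 pts of employment moves it by ~{emp_move:.2f} weighted SDs")
```

On these made-up numbers, the modest employment gain (about 0.23 weighted standard deviations) outweighs even a 22-point LSAT jump (about 0.18), because the employment category carries more than six times the weight.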