In 2025, law school admissions practices continue to look at the LSAT like it's 2005
The LSAT is an important predictor of law school success: it does a very good job of identifying who will perform well in law school. The higher your LSAT score, the higher your law school grades are likely to be. The correlation is not perfect, but it is strong. And when combined with your undergraduate grade point average (UGPA)—yes, regardless of your major, grade inflation, school disparities, and all that—it predicts law school success even better.
But the LSAT has changed over the years, as has its weight in the USNWR rankings. Many law school admissions practices, however, look at the LSAT like it’s 2005—as if the scores mean what they did back then, and as if the USNWR rankings weigh them as they did back then. A lot has changed in a generation.
Changes to reporting of LSAT scores and repeaters
The Law School Admission Council, which administers the LSAT, has long maintained that the most accurate predictor of law school grade point average (LGPA) is the average of all LSAT scores. If a prospective student takes the LSAT twice and scores a 164 and a 168 on those two tests, the most accurate way of evaluating that student is to conclude that the student received a 166. And schools used to report the average score to the American Bar Association.
But in 2006, the ABA changed what schools must report, allowing them to report the highest LSAT score rather than the average of LSAT scores. For this student, then, it is no longer a 166 but a 168—even though the higher number is less accurate in terms of the LSAT’s predictive value.
This change incentivized repeat test-taking. Back in 2010, about two-thirds of test-takers took the exam only once. More would be incentivized to repeat the exam. But there was an upper limit on this: LSAC only administered the exam four times a year, and it only permitted test-takers to repeat up to three times in a two-year period.
But in 2017, LSAC lifted that cap and allowed unlimited retakes—it has since brought the number back down to five in a “reportable score period” (around 6 years) and seven overall, but that is still much more than three. It now offers around eight administrations of the LSAT each year, up from four.
Granted, the total number of people inclined to take the exam five times is quite small. But repeaters continue to climb. In 2023-2024, repeat test-takers accounted for a majority of the LSATs administered.
Additionally, about 20% of test-takers do not have a “reportable score.” That means, under an option from LSAC developed over the last two decades, you can “peek” at your score and decide to cancel after learning about the score, preventing schools from seeing your score. (This is mostly irrational behavior from prospective students because schools will still report the highest score received, but it further muddies the waters for identifying the difference between the highest score and the average score.)
So, if you are a law school inclined to look at the LSAT as a predictive tool of student success, you would want to use the average score. But now that the ABA permits reporting the highest score—and because the USNWR rankings likewise use that score—all of the incentives are to rely on the highest score in admissions decisions, even if it is less accurate at predicting student success. True, gains among repeaters tend to be modest, about 2 points for a typical test-taker. But, as I explain below, these changes have cumulative effects.
Importantly, when LSAC measures the validity of the LSAT, it still measures it using the average score. Law schools, however, typically use the highest score—and therefore opt to use a less valid measure of the LSAT.
(Likewise, the LSAT is more valid at predicting success when combined with UGPA in an “index score,” but most law schools do not use it this way either, again choosing a less valid way of relying upon it—more on that later.)
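To make the idea of an index concrete: in its general form it is just a weighted linear combination of LSAT and UGPA. Here is a minimal sketch in Python; the weights and constant are hypothetical placeholders for illustration, since LSAC computes school-specific coefficients and any real analysis would use those.

```python
# A minimal sketch of an admission "index" combining LSAT and UGPA.
# The default weights and constant below are hypothetical placeholders,
# not any school's actual LSAC coefficients.

def index_score(lsat: int, ugpa: float,
                lsat_weight: float = 0.4,
                ugpa_weight: float = 10.0,
                constant: float = 0.0) -> float:
    """Linear combination of LSAT and UGPA -- the general form of an index."""
    return lsat_weight * lsat + ugpa_weight * ugpa + constant

# Two applicants with the same LSAT but different UGPAs get different index
# values, which is the point: the combined measure carries more information
# than the LSAT alone.
print(index_score(168, 3.9))  # 106.2
print(index_score(168, 3.4))  # 101.2
```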
Extra time to take the test
Any mention of accommodations in test-taking is a fraught topic. But set aside whatever preferences you may have about the relationship between accommodations and test-taking. I want to point out what accommodations—and specifically, extra time on the exam—mean for using the LSAT as a predictive tool.
Data from LSAC shows that accommodated test-takers receive higher scores than non-accommodated test-takers, around four to five points. Most accommodations translate into LSAT scores that still predict law school success—for instance, a visually-impaired person receiving large-print materials will receive a score that fairly accurately predicts law school success. Time-related accommodations are the exception: LSAT scores tend to overpredict law school success when extra time is granted. And requests for additional time have increased dramatically over the years, from around 6,000 granted requests in 2018-2019 to around 15,000 in 2022-2023.
My point here is certainly not to debate accommodations in standardized testing, but to point out that additional time on the LSAT makes it less predictive, and there has been a dramatic increase in such accommodations. In 2014, the Department of Justice entered into a consent decree with LSAC to stop “flagging” such LSAT scores. So there remains a cohort of LSAT scores, increasing by the year, that is less predictive of law school success.
A change in test composition
In 1998, a technical study from LSAC looked at each of the three components of the LSAT—the analytical reasoning (sometimes called “logic games”), logical reasoning, and reading comprehension. The LSAT overall predicted first-year LGPA, and each individual component contributed to that predictive power. But in 2019, LSAC entered into a consent decree resolving a challenge that the analytical reasoning section ran afoul of federal and state accommodations laws. And in 2023 it announced the end of that section.
I have not yet seen any subsequent technical report from LSAC (perhaps it’s out there) explaining how it concluded that a test without logic games could be just as valid a predictive measure, particularly given its 1998 report. But certainly anecdotes, like this one in the Wall Street Journal, suggest some material changes:
Tyla Evans had almost abandoned her law-school ambitions after struggling with the logic games section. “When I found out they were changing the test, I was ecstatic,” said Evans, a 2023 George Washington University graduate. Her LSAT score jumped 15 to 20 points on the revised test, enabling a second round of applications. So far, she has received two sizable financial-aid offers and is waiting to hear from a few more schools.
The LSAT has fundamentally changed in its content, which suggests that scores today are not truly comparable to scores from previous eras, and that those scores will be less predictive of success.
Opting out of the test
One more slight confounding variable, although its effect on LSAT scores is more indirect and I’ll mention it only briefly. More students are attending law school without having taken the LSAT. This comes from a variety of sources—increasing cohorts of students who come directly from the law school’s parent university without an LSAT score; alternative admissions tests like the GRE; and so on. Publicly disclosed LSAT quartiles, then, conceal a cohort of students who have selected out of taking the LSAT. It is hard to know precisely how this affects the overall composition of LSAT test-takers, but it is one more small detail to note.
What the rankings use
A run-of-the-mill admissions office should care about LSAT scores as a predictor of law school success. Several developments in the last generation have diluted the power of the LSAT as a predictor—reliance on the highest score for repeat test-takers, unlimited retakes, additional time for some test-takers, a change in content, and a change in the cohort taking the exam. Nevertheless, despite all those changes, it is still a good predictor, or better than alternatives.
But this admissions office probably cares a great deal about something else too—law school rankings. Rightly or wrongly—again, not a debate for this post—law school rankings, particularly the USNWR rankings, exert a great deal of influence on how prospective law students perceive schools. Even marginal changes can significantly influence admissions decisions and financial aid costs, not to mention the effect on students, alumni, faculty, and the university, who treat the rankings (again, rightly or wrongly) as a barometer of the law school’s trajectory and overall health.
But USNWR does two important things with LSAT scores, one very public and recent, one more subtle and longstanding but often misunderstood.
First, USNWR has long used the median LSAT of an incoming class as a benchmark of the overall quality of the class (a decision long known to distort how law school admissions offices conduct admissions, because the “bottom” credentials of the incoming class look very different from the “top”—more on that in a bit). But it changed its formula recently to focus more on outputs instead of inputs. That meant the weight it gave to the median LSAT score dropped from 11.25% of the rankings to just 5%.
A related, and subtle, change is the significant weight now given to employment outcomes, which is 33% of the rankings. It is not only a large category, but one with a huge spread from top to bottom. That has the effect of diminishing the value of the other categories. Let me offer one comparison I raised earlier, in very simple terms: moving from a school ranked around 100th in the employment rankings to around 50th, which is typically the difference of placing an additional 3-4% of the class in full-time legal jobs, is actually a bigger deal than moving from a 153 median LSAT score to a 175 median LSAT score (which is a massive shift).
In short, the LSAT scores matter far less in the new rankings than they did before.
Second, USNWR has long used the percentile equivalents of LSAT scores, not the raw scores themselves. This is deceptive to the outsider, because there is such an emphasis on the raw LSAT score. But as scores get higher, each additional point actually reflects a narrower and narrower improvement in how the rankings perceive the incoming class.
LSAT scores look like a bell curve. The middle of the distribution, around 150 to 152, is where most of the scores fall. At each end of the curve are increasingly small numbers of students receiving each score. So the move from 160 to 161 reflects a more significant improvement, relative to others, than the move from 170 to 171.
A quick chart will help illustrate the point using a recent LSAC percentile equivalent table. The gap from 173 to 172 is small, 0.65 percentage points. From 168 to 167, larger, 1.55 percentage points. From 164 to 163, still larger, 2.5 percentage points. And from 158 to 157, 3.42 percentage points.
This plays out the same way in the USNWR rankings (roughly, subject to, among other things, the fact that USNWR includes GRE and other scores in some schools’ percentile equivalencies). Using my modeled rankings measure, the scores are scaled against other schools, and in unweighted terms, the gap between a 173 and a 172 is around 0.032 raw points; 168 to 167, 0.077; 164 to 163, 0.123; and 158 to 157, 0.142. (The difference between the 164-163 gap and the 158-157 gap isn’t as dramatic here, because law school medians cluster higher on the LSAT curve than the overall distribution of test-takers.)
These unweighted numbers are meant to let you compare the raw point values against each other. But recall that these figures receive only 5% weight in the overall rankings. So that 0.032 actually converts to 0.0016—a fraction of a hundredth of a point, a rounding error in many circumstances. Without getting into all of the details of how USNWR otherwise builds its rankings formula, it is, quite clearly, a very, very small number.
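To make the arithmetic concrete, here is a small sketch of the chain described above: percentile gaps shrink near the top of the scale, and whatever gap remains is then multiplied by the roughly 5% weight the rankings now give the median LSAT. The percentile values below are illustrative numbers constructed to reproduce the gaps quoted above, not an official LSAC table, and the 0.032 figure comes from my modeled measure.

```python
# Illustrative percentile equivalents, constructed to match the gaps quoted
# above (not an official LSAC table).
percentiles = {173: 99.00, 172: 98.35, 168: 95.60, 167: 94.05,
               164: 90.00, 163: 87.50, 158: 73.00, 157: 69.58}

# One-point gaps narrow as scores rise.
for hi, lo in [(173, 172), (168, 167), (164, 163), (158, 157)]:
    gap = percentiles[hi] - percentiles[lo]
    print(f"{hi} vs {lo}: {gap:.2f} percentage points")

# The modeled, unweighted gap between a 173 and a 172 school, then the
# effect of the roughly 5% weight the rankings now give the median LSAT.
unweighted_gap = 0.032
LSAT_WEIGHT = 0.05

weighted_contribution = unweighted_gap * LSAT_WEIGHT
print(weighted_contribution)  # 0.0016 -- a fraction of a hundredth of a point
```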
What this means, then, is that law schools perform “better” in the rankings each time the median LSAT of their incoming class increases, but the marginal value of each additional point increase diminishes.
How often can changes to LSAT median scores alter a school’s ranking?
So, the big question is this. If many schools (and most schools pursuing a more “elite” ranking) care about their LSAT medians, what tangible difference would a change in the median LSAT make to a school’s ranking?
This should be a very straightforward question for most law schools. Most law schools, that is, should build a very basic model and then run a cost-benefit analysis. It’s a lot of effort to pursue a median LSAT score of X (including skewed admissions decisions, financial aid costs, etc.). Is it worth it?
For years, the answer has been unhesitatingly and unflinchingly “yes” at many law schools. Or, perhaps, a begrudging and inevitable “yes.” When the LSAT was a significant part of the rankings, it could make a big difference. And schools saw the payoff in those numbers.
But today, the value of the LSAT is dramatically lower than it was in the past. Schools should be reassessing whether this unflinching commitment to LSAT scores is worth it. But it turns out, schools are not reassessing—they are sticking to their old practices.
Median chasing in 2025
Below are a few charts for some top-30-ish schools from LSD.law, which continues a longstanding practice of giving law school applicants a site to submit their individual admissions decisions so they can be compared in the aggregate. I slightly cropped the charts to focus on the heartland of reported acceptances as of March 21, 2025.
The charts below show the UGPA and LSAT of accepted students. Left to right shows an increasing LSAT score; bottom to top shows an increasing UGPA.
The schools don’t particularly matter here. The point is that the admissions practice—or “strategy”—is essentially identical at every school, with some twists at various institutions (e.g., how they handle UGPA). There is a sharp drop-off to the left of some line that represents a school’s targeted “median” LSAT. Most accepted students to the left of that line are above the median UGPA line.
In short, just like schools back in 2013 and earlier, schools are “chasing” the median. They are ignoring high-caliber students with numbers that sit just below their medians in exchange for students who can help boost one side or the other of their medians (often, most starkly, as can be seen, the LSAT medians).
Do LSAT median changes affect the rankings?
This behavior would be rational if the LSAT mattered for rankings purposes like it did in the past. But it doesn’t. Its value has been significantly reduced. Additionally, schools ought to understand that marginal increases in LSAT score (which at the top end are extremely expensive—there are fewer of those scores and more competition for them, so financial aid packages can become quite costly) are even less valuable the higher the score. That said, a school might simply have the desire to get better, and the better the LSAT score, the more likely it is to help increase the ranking.
That’s true, but how likely is it to happen?
I modeled the USNWR rankings and ran some counterfactuals to assess whether schools in a particular LSAT band would drop in the rankings. The results are quite telling.
I ran estimates for the 43 schools with median LSAT scores of 165 or higher. I estimated a one-point drop in each school’s LSAT median and assumed all other schools remained the same. Impressively, for 38 of the schools, the ranking did not change. It changed for only 5 schools. And 2 of those 5 schools had a non-trivial cohort of GRE scores (which tend to be lower, often substantially lower, than LSAT medians), so it’s not as much a “true” comparison, because their median LSAT was actually at a lower percentile than peer schools’.
I then took a quick look at the 17 schools with a 156 LSAT median, and a whopping 10 of the 17 would see a drop—an outright majority. A jump of one point in the 156-158 range is worth about six times as much as a jump of one point in the 172 range and about twice as much as one in the 167 range.
Again, knowing what we know about percentile distributions, this should not be so shocking. The marginal value of climbing (or dropping) a point diminishes the higher you go.
Perhaps even more surprising, however, is modeling a two-point drop for a given school. Even a two-point drop adversely affects only 15 of the 43 schools—that is, there’s a 2-in-3 chance that even a two-point drop does not adversely affect a school with a median LSAT of 165 or higher. And 4 of those 15 schools had a non-trivial GRE cohort (which, again, tends to lower the value of the LSAT median in the first place). Unsurprisingly, on the whole, we see greater effects the farther down the LSAT scale we go.
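For anyone who wants to replicate the flavor of this exercise, here is a minimal sketch of the counterfactual mechanics. It is not my actual rankings model: the school data and weights below are illustrative stand-ins, and the point is only the mechanism of standardizing each metric across schools, applying weights, ranking, and then re-ranking after nudging one school’s LSAT median down a point.

```python
from statistics import mean, pstdev

# Toy data: school -> (median LSAT, employment metric, all other metrics combined).
# These values and the weights below are illustrative stand-ins, not the actual
# USNWR inputs or formula.
schools = {
    "A": (172, 0.92, 0.80), "B": (170, 0.88, 0.78), "C": (168, 0.85, 0.75),
    "D": (166, 0.83, 0.74), "E": (165, 0.80, 0.72), "F": (163, 0.78, 0.70),
}
WEIGHTS = (0.05, 0.33, 0.62)  # LSAT ~5%, employment ~33%, everything else

def z_scores(values):
    """Standardize one metric across all schools."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

def rank(data):
    """Weighted sum of standardized metrics, converted to ordinal ranks."""
    cols = list(zip(*data.values()))
    standardized = list(zip(*(z_scores(col) for col in cols)))
    totals = {name: sum(w * z for w, z in zip(WEIGHTS, zs))
              for name, zs in zip(data, standardized)}
    ordered = sorted(totals, key=totals.get, reverse=True)
    return {name: i + 1 for i, name in enumerate(ordered)}

baseline = rank(schools)
for name in schools:
    # Counterfactual: drop this school's median LSAT by one point.
    perturbed = dict(schools)
    lsat, emp, other = perturbed[name]
    perturbed[name] = (lsat - 1, emp, other)
    moved = rank(perturbed)[name] != baseline[name]
    print(f"{name}: rank {'changes' if moved else 'holds'} after a one-point drop")
```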
In short, if a school’s motivation in maximizing its LSAT median is to improve its position in the rankings (and the revealed practices of admissions committees on LSD reflect that this is the motivation), it needs to assess the cost of this decision. It is quite apparent that if a school were to give up a bit of ground in LSAT medians, it would, in most circumstances, not be adversely affected in the rankings. That should yield a profound change in admissions strategy—if the admissions strategy is designed for 2025 instead of 2005.
Of course, if you are standing still while everyone else climbs a point, it could adversely affect you in the long run, and some drops in score are too detrimental. Likewise, the LSAT does predict law school success, so people with higher scores are going to be better students on the whole, and the score can still be valuable (although, as noted, a given score probably means less than it did before). And schools could be considering alternative admission strategies (e.g., alternative tests like the GRE, test-optional policies, etc.) that complicate this narrative. But the point with respect to LSAT medians still stands.
The irrational pursuit of round numbers
One more data point on potentially irrational behavior with respect to admissions practices and LSAT scores. Here are the median LSAT figures for most schools and their distribution (how many schools had that median) for the incoming class in 2024.
Now, I am hardly an expert—or even a novice—in regression discontinuity analysis, but something strange happens around the numbers 150 and 160. Each reflects a significant spike relative to the number of schools with a 149 or a 159. That number jumps from 1 to 11 going from 149 to 150, and from 4 to 16 going from 159 to 160. True, the numbers are not a smooth bell curve overall and do not reflect a gentle increase and decrease. Nevertheless, there is no jump from one score to the next like these two anywhere else in the distribution. It is as if admissions offices—or perhaps their deans—are fixated on securing a round number. If that’s the case, it’s an even more absurd effort at median chasing (and frankly, many schools in the 150-ish range are not “chasing” the USNWR rankings). (Perhaps this is just a one-off, and we’ll see if such jumps occur in the future.)
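One crude way to see the spike, without any regression discontinuity machinery, is simply to compare the counts at adjacent scores. The sketch below uses only the four counts quoted above; checking the rest of the distribution would require the full 2024 medians data.

```python
# Counts of schools at each median LSAT, limited to the values quoted above
# (the full 2024 distribution would be needed for every other score).
counts = {149: 1, 150: 11, 159: 4, 160: 16}

# The "jump" in the number of schools from one score to the next.
for low, high in [(149, 150), (159, 160)]:
    jump = counts[high] - counts[low]
    print(f"{low} -> {high}: +{jump} schools "
          f"({counts[low]} to {counts[high]})")
```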
Coda
Admissions strategies at a given school may differ for any number of reasons. At some institutions, leadership demands certain kinds of output and results, including maximizing any opportunity for rankings advancement. At other institutions, the status quo has been a successful formula in the past, and it’s a huge risk to suggest changing that. At still others, there isn’t anyone available to do a cost-benefit analysis or evaluate the tradeoffs and risk tolerance. And at still other institutions, well, I’m sure they think I’m wrong.
But let me recap. The LSAT, as a raw score, is less predictive of ability than it was 20 years ago. That is, a 170 or a 160 means less than it did 20 years ago. It may still be predictive in the aggregate. That is, a 170 means a higher likelihood of success than a 160. But there are error rates in that 170 that were unknown 20 years ago—the 170 likely overstates the “true” value compared to 20 years ago. Relatively speaking, and in terms of its validity as a statistical matter, it’s still valuable—it just has a different value than before.
Likewise, schools have continued to rely on the LSAT but used it in a way that makes it less predictive than it is designed to be—by relying on the highest score, for instance, or by refusing to use the index score. This is exacerbated by the fact that LSAC allows more retakes than it did a generation ago, and it allows cancellation of scores through mechanisms unknown a generation ago.
More recent developments, including the acceleration of extra time test-takers and the dropping of logic games from the LSAT, promise to further dilute the predictive validity of the LSAT in yet-unknown ways.
Schools have dabbled in GRE admissions or other alternative tests, or in waiving a test score for applicants with sufficiently high UGPAs from the law school’s home undergraduate institution. But back in 2023, I posited that it was an “extraordinary moment” for law schools to rethink admissions. Some ideas I floated included:
Law schools could rely more heavily on the LSAC index, which is more predictive of student success, even if it means sacrificing a little of the LSAT and UGPA.
Law schools could seek out students in hard sciences, who traditionally have weaker UGPAs than other applicants.
Law schools can consider “strengthening” the “bottom” of a prospective class if they know they do not need to “target” a median—they can pursue a class that is not “top heavy” and does not have a significant spread in applicant credentials from “top” to “bottom.”
Law schools can lean into need-based financial aid packages. If pursuit of the medians is not as important, a school can afford to lose a little on the medians in merit-based financial aid and instead use some of that money for need-based aid.
Law schools could rely more heavily on alternative tests, including the GRE or other pre-law pipeline programs, to ascertain likely success if it proves more predictive of longer term employment or bar passage outcomes.
(There are other suggestions there, too, including interviewing prospective candidates or evaluating CVs for past employment experience, but these may well be more of a mixed bag.)
It seems, however, that schools are not changing their admissions practices. They are largely living in a 2005 universe of admissions.
Of course, any change carries risk. It is a time-consuming endeavor to pursue alternative admissions strategies, and reactive law schools likely do not have the infrastructure to do so as universities face hiring freezes. Even a marginal risk of declining in rank is too great a negative for some law schools—even if the change might redound to a school’s benefit in the long run with higher-quality students, however the school chooses to measure them, rather than splitting LSAT and UGPA into quadrants and admitting on that basis. And alternative strategies could benefit schools in the long run on employment measures, but schools may be too risk averse to wait five years for that to play out.
But the point of this extremely lengthy post is that law school admissions practices are stuck in 2005. There is a significant uptick in prospective students, which ought to give schools the flexibility to think more creatively about law school admissions, as they have more similarly-situated students to choose from. And it’s an open question which schools, if any, will step forward as the ones that don’t look at the LSAT like they did a generation ago.