Imagine you had a tool to predict the future. You'd probably use it. A lot, in fact, especially if that tool predicted success in your industry.
Then, one day, you abruptly stop using that tool. It would probably mean some combination of the following: a better tool for predicting success; a decline in quality of that tool; some significant negative side effect from using that tool; a lack of concern for learning the predictive value offered by that tool; or an alternative advantage that might be gained only if the tool is not used.
For the LSAT, the latter four reasons help explain the slow, steady decline of its use.
A decline in the quality of that tool
The LSAT has long been deemed an extremely reliable test. Reliable, in that it highly and consistently correlates with first-year law school grade point averages. (For numerous studies, see the LSAC reports.) It uses item response theory, which allows the scores to reflect similar quality over time--a 170 on each test looks roughly the same, regardless of the month or year in which the test is taken.
The LSAT is even better when combined with a prospective law student's undergraduate GPA. And, if a school so desires, it can obtain from LSAC an optimal "index formula" that weighs LSAT and UGPA to best fit the school's first-year grade distribution.
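The idea behind an index formula is simply a weighted linear combination of the two inputs. The sketch below is illustrative only: the weights and intercept are hypothetical, not LSAC's, which derives school-specific values by fitting the combination to each school's first-year grades.

```python
# Hypothetical index formula: a weighted linear combination of LSAT and UGPA.
# The weights and intercept are made up for illustration; LSAC computes
# school-specific values by regressing first-year GPA on LSAT and UGPA.

def index_score(lsat, ugpa, w_lsat=0.022, w_ugpa=0.53, intercept=-1.5):
    """Return a predicted first-year performance index (hypothetical scale)."""
    return w_lsat * lsat + w_ugpa * ugpa + intercept

# Two applicants with different LSAT/UGPA mixes:
a = index_score(170, 3.4)   # high LSAT, middling UGPA
b = index_score(160, 3.9)   # lower LSAT, strong UGPA
```

The point of the fitted weights is exactly this kind of trade-off: the formula tells a school how much UGPA it should accept in exchange for LSAT points, rather than leaving that judgment to intuition.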
The LSAT, however, has lost some of this quality.
For many years, schools generally disclosed and relied upon the average of LSAT scores from a single applicant. LSAT studies, after all, revealed that the average is the most predictive of the applicant's ability, not the high or the low score. In 2006, however, the American Bar Association decided to request that schools report the high scores, not the average scores, of applicants. Despite the lower predictive value of reporting the high score, schools have increasingly pursued these high-end scores.
Additionally, LSAC recently entered a consent decree to stop flagging LSAT scores earned through accommodated test-taking and to make it easier to secure accommodations. Because LSAC finds reliable only those scores secured under standard conditions, the consent decree means that the LSAT scores schools receive will have lower predictive value.
Some significant negative side effect from using that tool
When U.S. News & World Report calculates its rankings of law schools, one-eighth of its entire score is based on a single LSAT score: the median of the incoming class. This creates significant distortions in how law schools assemble incoming classes. Schools pursue that median LSAT score, despite the more predictive index formula they might otherwise use. Even more troubling, LSAT takers are fewer and fewer, making strong scores more difficult to find.
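The distortion follows from the statistic itself: a median reveals nothing about the bottom half of a class. A minimal sketch, with made-up score lists, shows two classes that look identical to the rankings despite very different depth:

```python
from statistics import median

# Two hypothetical entering classes. USNWR sees only the median LSAT,
# which is identical even though class_b collapses below the midpoint.
class_a = [158, 159, 160, 165, 170]   # solid depth below the median
class_b = [140, 145, 160, 165, 170]   # weak depth below the median

assert median(class_a) == median(class_b) == 160
```

A school chasing the median can therefore prop up the one number the rankings reward while its 25th percentile quietly erodes.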
As a result, schools have an incentive to avoid declines in their LSAT median, which might mean a decline in their USNWR rank. And so, as recent reports indicate, schools have begun to admit a non-trivial number of students without that score. The trend, really, is not new but several years old--begun by new interpretations of accreditation regulations that permit alternative metrics, such as SAT or ACT scores, for evaluating incoming students.
Of course, there's no data indicating how reliably SAT or ACT scores correlate with first-year law school grades, or how to index those scores with undergraduate GPA for an even more reliable picture. But the negative externality--the risk of median declines and a corresponding USNWR hit--is too great a cost. (You'll note, then, that the use of SAT or ACT scores is not, as one might say, a "better tool for predicting success." It is not a tested method at all.)
A lack of concern for learning the predictive value offered by that tool
It might have been the case that the LSAT was valued by admissions departments because it was a way of guessing success. Better students would be at a lower risk of dropping out or failing out. Better students would have a better chance at passing the bar and earning desirable employment outcomes.
But if those metrics are less valuable than other concerns--such as today's LSAT profile for an incoming class, prized over that class's graduation profile in three years or its employment profile in four--then schools push them aside. It's not that schools are unconcerned with first-year student success--they undoubtedly are concerned. It is simply that such concerns necessarily lessen as the obsession over an LSAT median heightens--at the expense of the depth of the class, as the abrupt decline in the 25th percentile at many schools shows.
An alternative advantage that might be gained only if the tool is not used
These are, of course, rather rankings-centric views. But there's also an advantage to be gained in refusing to use LSAT scores for prospective students. If a school is one of the only, or one of the few, doing so, it is a very strong enticement for the, let's face it, lazy prospective law student: forgo taking the LSAT, forgo opportunities at most other law schools in America, and effectively commit to a school without an LSAT requirement (assuming other metrics, like GPA and a "comparable" SAT or ACT score, have been met).
It's a decisive recruiting advantage, particularly for a law school seeking to attract candidates from its home undergraduate institution, a baked-in base likely inclined to attend the same law school anyway. Sure, students lose options elsewhere, but they save the time and financial cost of LSAT preparation and agony. It might be, of course, that this incentivizes all of the wrong sorts of students, but that might be a matter of perspective, depending on whether one views the LSAT as an unnecessary hoop or an objective measure of likely future performance.
The LSAT, then, is not abruptly dying. It has been experiencing nicks and scrapes for a decade now, and an increasing number of factors, both internal to LSAC and external in the market for legal education, have put it in a precarious position of slow and steady decline.