What I got right (and wrong) in projecting the USNWR law rankings
In January, when I projected the new USNWR law rankings, I wrote, “I wanted to give something public facing, and to plant a marker to see how right—or more likely, how wrong!—I am come spring. (And believe me, if I’m wrong, I’ll write about it!)”
With the rankings out, we can compare them to my projections.
A couple of my assumptions were pretty good. Ultimate bar passage and student-librarian ratio were added or reimagined as factors. More weight was put on outputs. Less weight was put on the peer score.
But I thought USNWR would need to add some weight to admissions statistics to make up for the loss of other categories. I was wrong. They diminished those categories and added a lot of weight to outputs. Employed at 10 months rose from 14% to 33%. First-time bar passage rose from 3% to 18%. Those are massive changes. For reference, I thought a reasonable upper bound for employment could be 30%, and for first-time bar passage, 12%.
The model was still pretty good.
I got 13 of 100 schools exactly right—not great.
63 schools hit the range I projected them in—pretty good, but not great.
But 81 of 100 schools matched the general trend I projected for them: whether they would rise, fall, or stay in the same spot. Even when I missed the exact placement, I got the direction right for almost all schools.
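Those three accuracy measures can be tallied mechanically. Here is a minimal sketch of how such a scoring pass might look; the data and function are hypothetical illustrations, not my actual projections or the real rankings:

```python
# Hypothetical sketch: score projections three ways, as described above.
# Each entry: (projected_rank, (range_low, range_high), last_year_rank, actual_rank).
# Lower rank numbers are better; data below are made up for illustration.

def score(projections):
    exact = in_range = direction = 0
    for proj, (low, high), last, actual in projections:
        if proj == actual:
            exact += 1
        if low <= actual <= high:
            in_range += 1
        # Direction: -1 rise (smaller rank number), +1 fall, 0 stay.
        projected_move = (proj > last) - (proj < last)
        actual_move = (actual > last) - (actual < last)
        if projected_move == actual_move:
            direction += 1
    return exact, in_range, direction

schools = [
    (5, (3, 7), 6, 5),      # projected rise 6->5, actual 5: exact, in range, right direction
    (20, (18, 24), 15, 22),  # projected fall, actual fell to 22: in range, right direction
    (40, (35, 45), 40, 48),  # projected stay, actual fell to 48: full miss
]

print(score(schools))  # → (1, 2, 2)
```

The directional check is the most forgiving of the three, which is consistent with it having the highest hit rate.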
Again, part of this comes from the fact that so many factors correlate with one another that it’s relatively easy to spot trends, even if my models missed some of the biggest swings. But I also included some bigger swings in my factors, which I think helped put the trajectories in the right place.
Barring changes to the methodology next year (sigh)…