Visualizing legal employment outcomes in Florida in 2018

This is the fifth in a series of visualizations on legal employment outcomes for the Class of 2018. Following posts on outcomes in Illinois, Pennsylvania, Texas, and Ohio, here is a visualization for legal employment outcomes of graduates of Florida law schools for the Class of 2018. (More about the methodology is available at the Illinois post.) Last year's Florida post is here.

The Florida report is one of the most positive employment reports so far. Total bar passage-required jobs rose significantly, from 1364 in 2017 to 1430 in 2018. A small uptick in J.D.-advantage placement and a small drop in total graduates improved overall placement from 69.8% to 74.5%, which brings Florida more in line with other states. (A quick arithmetic check appears after the table below.)

As always, please notify me of any corrections or errata.

Peer Score School 2018 YoY (pts) BPR JDA LSF Grads 2017 BPR JDA LSF Grads
3.3 University of Florida 88.4% 2.0 258 24 1 320 86.4% 246 33 1 324
3.1 Florida State University 85.8% 5.5 150 18 1 197 80.3% 150 21 0 213
2.8 University of Miami 84.3% 1.6 249 36 0 338 82.7% 195 20 0 260
2.1 Stetson University 83.3% 6.1 168 21 0 227 77.2% 167 36 0 263
2.1 Florida International University 82.9% 2.2 109 12 0 146 80.6% 112 13 0 155
1.2 Barry University 67.9% 6.2 91 38 0 190 61.7% 75 25 0 162
1.2 Ave Maria School of Law 65.7% 16.3 39 7 0 70 49.4% 32 8 0 81
1.5 St. Thomas University 64.3% 5.7 111 6 0 182 58.6% 99 7 0 181
1.2 Florida Coastal School of Law 60.8% 14.1 94 19 0 186 46.6% 97 14 0 238
1.6 Nova Southeastern University 57.5% -4.9 109 17 0 219 62.4% 129 12 0 226
1.5 Florida A&M University 50.4% -1.5 52 14 0 131 51.9% 62 7 0 133
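
If you want to check the statewide arithmetic, here is a minimal sketch in Python. The rows are hard-coded from the table above; the column order and variable names are my own shorthand, not an official format.

```python
# Minimal sketch: reproduce the statewide Florida totals from the table above.
# Each tuple is (school, BPR, JDA, LSF, graduates) for the Class of 2018.
rows_2018 = [
    ("University of Florida",            258, 24, 1, 320),
    ("Florida State University",         150, 18, 1, 197),
    ("University of Miami",              249, 36, 0, 338),
    ("Stetson University",               168, 21, 0, 227),
    ("Florida International University", 109, 12, 0, 146),
    ("Barry University",                  91, 38, 0, 190),
    ("Ave Maria School of Law",           39,  7, 0,  70),
    ("St. Thomas University",            111,  6, 0, 182),
    ("Florida Coastal School of Law",     94, 19, 0, 186),
    ("Nova Southeastern University",     109, 17, 0, 219),
    ("Florida A&M University",            52, 14, 0, 131),
]

bpr   = sum(r[1] for r in rows_2018)   # 1430 bar-passage-required jobs
jda   = sum(r[2] for r in rows_2018)   # J.D.-advantage jobs
lsf   = sum(r[3] for r in rows_2018)   # school-funded jobs
grads = sum(r[4] for r in rows_2018)   # total graduates

placement = (bpr + jda + lsf) / grads
print(f"BPR: {bpr}, placement: {placement:.1%}")  # BPR: 1430, placement: 74.5%
```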

Visualizing legal employment outcomes in Texas in 2018

This is the third in a series of visualizations on legal employment outcomes for the Class of 2018. Following posts on outcomes in Illinois and Pennsylvania, here is a visualization for legal employment outcomes of graduates of Texas law schools for the Class of 2018. (More about the methodology is available at the Illinois post.) Last year's Texas post is here.

Bar passage required jobs improved notably, from 1321 to 1366, while J.D.-advantage jobs declined. Total graduates increased from 1959 to 1979, and placement improved from 75.8% to 76.3%.

As always, please notify me of any corrections or errata.

Peer Score School 2018 YoY (pts) BPR JDA LSF Grads 2017 BPR JDA LSF Grads
4.1 University of Texas-Austin 92.8% 6.0 238 15 6 279 86.9% 259 18 7 327
2.4 Baylor University 89.0% 2.8 100 5 0 118 86.2% 107 4 1 130
2.7 Southern Methodist University 87.1% 4.0 192 17 0 240 83.1% 179 17 0 236
1.9 Texas Tech University 85.8% 7.6 125 8 0 155 78.2% 137 17 0 197
2.7 University of Houston 85.4% 7.9 171 22 0 226 77.5% 153 26 0 231
2.4 Texas A&M University 81.9% 7.6 93 20 0 138 74.3% 118 18 0 183
1.6 South Texas College of Law Houston 67.8% 6.0 160 24 1 273 61.7% 157 27 0 298
1.6 St. Mary's University 62.5% -8.4 129 11 0 224 70.9% 118 20 1 196
nr University of North Texas Dallas 57.9% 6.4 76 8 0 145 51.5% 17 0 0 33
1.4 Texas Southern University 48.6% -17.0 82 6 0 181 65.6% 76 8 0 128

Visualizing legal employment outcomes in Pennsylvania in 2018

This is the second in a series of visualizations on legal employment outcomes for the Class of 2018. Following a post on outcomes in Illinois, here is a visualization for legal employment outcomes of graduates of Pennsylvania law schools for the Class of 2018. (More about the methodology is available at the Illinois post.) Last year's Pennsylvania post is here.

Total graduates were down slightly year-over-year, from 1256 to 1238, and the job picture improved a little, with 84.4% employed in bar passage required and J.D.-advantage positions, including 11 school-funded positions. The raw number of placements actually fell slightly, so the improvement in the rate is attributable to the smaller graduating class.

As always, please notify me of any corrections or errata. UPDATE: Duquesne’s employment data was accidentally overstated in an earlier post and has been edited to present the accurate data.

Peer Score School 2018 YoY (pts) BPR JDA LSF Grads 2017 BPR JDA LSF Grads
4.4 University of Pennsylvania 97.9% -0.9 216 12 10 243 98.8% 232 16 5 256
2.5 Villanova University 88.2% 5.8 127 15 0 161 82.4% 120 11 0 159
2.2 Pennsylvania State - Dickinson Law 87.3% 8.6 51 4 0 63 78.7% 41 7 0 61
2.1 Drexel University 83.7% 2.3 95 13 0 129 81.5% 88 13 0 124
2.7 Temple University 83.3% -3.4 161 13 0 209 86.6% 172 15 1 217
1.8 Duquesne University 80.0% 0.6 86 13 0 120 79.4% 80 20 0 126
2.3 Penn State Law 80.0% 1.9 86 13 1 125 78.1% 82 7 0 114
2.7 University of Pittsburgh 71.9% -5.7 85 12 0 135 77.5% 87 20 0 138
1.7 Widener Commonwealth 62.3% -4.9 32 1 0 53 67.2% 36 5 0 61

Visualizing legal employment outcomes in Illinois in 2018

Following up on a series of posts last year (and previous years), this is the first in a series visualizing employment outcomes of law school graduates from the Class of 2018. The recently released U.S. News & World Report ("USNWR") rankings, which include data for the Class of 2017, are already obsolete. The ABA will release the information soon, but individualized employment reports are already available on schools' websites.

The USNWR prints the "employed" rate as "all jobs excluding positions funded by the law school or university that are full-time, long-term and for which a J.D. and bar passage are necessary or advantageous." It does not give "full weight" in its metrics to jobs that were funded by the law school. USNWR gives other positions lower weight, but those positions are not included in the ranking tables. And while it includes J.D. advantage positions, there remain disputes about whether those positions are actually as valuable as bar passage required jobs. (Some have further critiqued the inclusion of solo practitioners in the bar passage required statistics.) Nonetheless, as a top-level category, I looked at these "full weight" positions.

The top chart is sorted by non-school-funded jobs (or "full weight" positions). The visualization breaks out full-time, long-term, bar passage required positions (not funded by the school); full-time, long-term, J.D.-advantage positions (not funded by the school); school-funded positions (full-time, long-term, bar passage required or J.D.-advantage positions); and all other outcomes. I included a breakdown in the visualization slightly distinguishing bar passage required positions from J.D.-advantage positions, even though both are included in "full weight" for USNWR purposes (and I still sort the chart by "full weight" positions).

The table below the chart breaks down the raw data values for the Classes of 2017 and 2018, with relative overall changes year-over-year. Here, I used the employment rate including school-funded positions, which USNWR used to print but no longer does; nevertheless, because there are good-faith disputes, I think, about the value of school-funded positions, I split the difference—I excluded them in the sorting of the bar graphs, and included them comparatively in the tables. The columns beside each year break out the three categories in the total placement: FTLT unfunded bar passage required ("BPR"), FTLT unfunded J.D. advantage ("JDA"), and FTLT law school funded BPR & JDA positions ("LSF"). This year, I also added the total graduates. (My visualization is limited because the bar widths for each school are the same, even though schools vary greatly in size, and that means raw placement might be more impressive considering class size.)
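
To make the two rates concrete, here is a minimal sketch of the per-school arithmetic. The school below is hypothetical, the function names are mine, and BPR/JDA/LSF follow the abbreviations above.

```python
# Minimal sketch of the per-school arithmetic described above (hypothetical numbers).
def full_weight_rate(bpr: int, jda: int, grads: int) -> float:
    """FTLT bar-passage-required + J.D.-advantage jobs, excluding school-funded ones."""
    return (bpr + jda) / grads

def total_placement_rate(bpr: int, jda: int, lsf: int, grads: int) -> float:
    """Same, but counting school-funded (LSF) positions, as in the tables below."""
    return (bpr + jda + lsf) / grads

# Hypothetical school: 120 BPR, 15 JDA, 5 LSF, 180 graduates.
print(full_weight_rate(120, 15, 180))         # 0.75   -> used to sort the chart
print(total_placement_rate(120, 15, 5, 180))  # ~0.778 -> reported in the tables
```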

The first state is Illinois (last year's visualization here). There were 1696 statewide graduates, a 3% decline from last year's class. The total placement rate among those graduates was 82% (including a few school-funded jobs). It is, once again, a slight improvement over last year, driven by the smaller class size. Placement in bar passage required jobs fell slightly again.

As always, if I made a mistake, please feel free to email me or comment; I confess there are always risks in data translation, and I am happy to make corrections.

Peer Score School 2018 YoY (pts) BPR JDA LSF Grads 2017 BPR JDA LSF Grads
4.7 University of Chicago 98.1% 0.4 188 4 10 206 97.7% 197 4 8 214
4.2 Northwestern University (Pritzker) 96.9% 3.0 205 12 5 229 94.0% 204 24 5 248
3.2 University of Illinois-Urbana-Champaign 91.9% 5.3 118 19 0 149 86.6% 111 12 0 142
2.6 Loyola University Chicago 85.5% 8.0 119 46 0 193 77.5% 138 34 0 222
2.6 Illinois Institute of Technology (Chicago-Kent) 81.0% 7.0 149 39 0 232 74.0% 133 32 0 223
2.2 DePaul University 73.5% 3.9 126 40 0 226 69.6% 126 34 0 230
1.7 The John Marshall Law School 67.5% -4.1 151 33 1 274 71.6% 161 43 0 285
1.6 Southern Illinois University-Carbondale 67.3% 8.7 63 11 0 110 58.6% 64 4 0 116
1.6 Northern Illinois University 66.2% -10.9 43 8 0 77 77.1% 42 11 1 70

February 2019 MBE bar scores bounce back from all-time lows

After cratering to all-time record lows last year, scores on the February administration of the Multistate Bar Exam have bounced back. It's good news, but modest—the rise merely returns scores to roughly their February 2017 level, which was at that time the lowest in history. In other words, scores have bounced back only far enough to match the second-lowest total on record… which is slightly better.

To be fair (which is not to say I've been unfair!), part of this overall score is likely driven by the Uniform Bar Exam. It used to be that there were more test-takers who'd passed a previous bar exam and would have to take another test in another jurisdiction. Those who'd already passed were likely to score quite well on a second attempt at a new bar. But the National Conference of Bar Examiners has indicated that the rise of the UBE has dropped the number of people taking a second bar, which in turn drops the number of high scorers, which in turn drops the MBE scores. So the drop in MBE scores isn't itself entirely a cause for alarm. It's a reflection that the UBE is reducing the number of bar test-takers by some small figure each year.

We now know the mean scaled national February MBE score was 134.0, up 1.2 points from last year's 132.8. We would expect bar exam passing rates to rise in most jurisdictions. Just as repeaters caused most of the drop last time, they are causing most of the rise this time. Repeaters' scores simply appear to be more volatile as a cohort of test-takers.

A couple of visualizations are below, showing long-term and short-term trends.

For perspective, California's "cut score" is 144, Virginia's 140, Texas's 135, and New York's 133. The trend is more pronounced when looking at a more recent window of scores.

The first major drop in bar exam scores was revealed to law schools in late fall 2014. That means the 2014-2015 applicant cycle, to the extent schools took heed of the warning, was a time for them to improve the quality of their incoming classes, leading to some expected improvement for the class graduating in May of 2018. But bar pass rates were historically low in July 2018. It’s not clear that law schools have properly adapted even after five years.

For now, we wait and see what the July 2019 exam brings. For more, see Karen Sloan over at NLJ.

The relationship between 1L class size and USNWR peer score

Professor Robert Jones has tracked the peer scores for law schools as reported by USNWR over time. That is, each year, each law school receives a peer score on a scale of 1 to 5, and we can see how that score has changed over time. He’s also tracked the average of all peer scores. That gives us an interesting idea—what do law professors think about the state of legal education as a whole? Perhaps schools are always getting better and we’d see peer scores climb; perhaps schools increasingly rate each other more harshly as they compete to climb the rankings and we’d see the peer scores drop. (There’s some evidence law faculties are pretty harsh about most law schools!)

At the risk of offering a spurious correlation, I noticed that the average score appeared to rise and fall with the conditions of the legal education market. The easiest way to track that would be to look at the overall 1L incoming class size the year the survey was circulated.

You can make a lot of correlations look good on two axes if you shape the two axes carefully enough. But there’s a good relationship between the two here. As the legal education market rose from 2006 to 2010 with increasingly large 1L class sizes, peer scores roughly trended upwards. As the market crashed through 2014, peer scores dropped. Now, the market has modestly improved—and peer scores have moved up much more quickly, perhaps reflecting optimism.
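
For anyone who wants to test the relationship more formally, a plain correlation between the two annual series is a start. The sketch below uses placeholder values aligned by year, not the actual data plotted above, just to show the computation.

```python
# Minimal sketch: correlate the average peer score with national 1L enrollment.
# Both series are placeholders (roughly 2010-2018) to illustrate the computation only.
from statistics import correlation  # Python 3.10+

avg_peer_score        = [2.62, 2.61, 2.59, 2.57, 2.55, 2.55, 2.56, 2.58, 2.60]
first_year_enrollment = [52500, 48700, 44500, 39700, 37900, 37100, 37100, 37400, 38400]

print(f"Pearson r = {correlation(avg_peer_score, first_year_enrollment):.2f}")
```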

All this is really just speculation about why peer scores would change, on average, more than 0.1 in a single decade, or why they’d move up and down. Intuitively, the fact that peer scores may improve as the legal academy feels better about the state of legal education, or worsen as it feels worse, seems to make sense. There are far better ways to investigate this claim, but this relationship struck me as noteworthy!

Gaming out Hein citation metrics in a USNWR rankings system

Much has been said about the impending USNWR proposal to have a Hein-based citation ranking of law school faculties. I had some of my own thoughts here. But I wanted to focus on one of the most popular critiques, then move to gaming out how citations might look in a rankings system.

Many have noted how Hein might undercount individual faculty citations, either accidentally (e.g., misspellings of names) or intentionally (e.g., exclusion of many peer-reviewed journals from their database), along with intentional exclusion of certain faculty (e.g., excluding a very highly-cited legal research & writing faculty member who is not tenured or tenure-track).

These individual concerns are real, but not necessarily because they would make the USNWR citation metrics less accurate. There are two reasons to be worried—non-random biases and law school administrative reactions—which I'll get to in a moment.

Suppose USNWR said it was going to do a ranking of all faculty by mean and median height of tenured and tenure-track faculty. “But wait,” you might protest, “my faculty just hired a terrific 5’0” scholar!” or, “we have this terrific 6’4” clinician who isn’t tenure-track!” All true. But, doesn’t every faculty have those problems? If so, then the methodology doesn’t have a particular weakness in measuring all the law schools as a whole against one another. Emphasis on weaknesses related to individuals inside or outside the rankings ought to wash out across schools.

Ah, you say, but I have a different concern—that is, let’s say, our school has a high percentage of female faculty (much higher than the typical law school), and women tend to be shorter than men, so this ranking does skew against us. This is a real bias we should be concerned about.

So let’s focus on this first problem. Suppose your school has a cohort of clinical or legal research & writing faculty on the tenure track, and they have lesser (if any) writing obligations; many schools do not have such faculty on the tenure track. Now we can identify a problem: some schools suffer under the methodology of these rankings.

Another—suppose your school had disproportionate numbers of faculty whose work appears in books or peer-reviewed journals that don’t appear in Hein. There might be good reasons for this. Or, there might not be. That’s another problem. But, again, just because one faculty member has a lot of peer-reviewed publications not in Hein doesn’t mean it’s a bad metric across schools. (Believe me, I feel it! My most-cited piece is a peer-reviewed piece, and I have several placements in peer-reviewed journals.)

Importantly, my colleague Professor Rob Anderson notes one virtue of the Sisk rankings that are not currently present in Hein citation counts: “The key here is ensuring that Hein and US News take into account citations TO interdisciplinary work FROM law reviews, not just citations TO law reviews FROM law reviews as it appears they might do. That would be too narrow. Sisk currently captures these interdisciplinary citations FROM law reviews, and it is important for Hein to do the same. The same applies to books.”

We simply don’t know (yet) whether these institutional biases exist or how they’ll play out. But I have a few preliminary graphics on this front.

It’s not clear how Hein will measure things. Sisk-Leiter measures things using a formula of 2*mean citations plus median citations. The USNWR metric may use mean citations plus median citations, plus publications. Who knows at this rate. (We’ll know in a few weeks!)
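
For reference, here is the Sisk-Leiter formula as described above, sketched in Python; the faculty citation counts are hypothetical.

```python
from statistics import mean, median

def sisk_leiter_score(citations: list[int]) -> float:
    """Sisk-Leiter faculty score as described above: 2 * mean citations + median citations."""
    return 2 * mean(citations) + median(citations)

# Hypothetical ten-person faculty's citation counts over the study window:
print(sisk_leiter_score([10, 25, 40, 55, 70, 80, 95, 120, 160, 300]))  # 2*95.5 + 75 = 266.0
```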

In the future, if (big if!) USNWR chooses to incorporate Hein citations into the rankings, it would likely do so by diminishing the value of the peer review score, which currently sits at a hefty 25% of the ranking and has been extraordinarily sticky. So it may be valuable to consider how citations relate to the peer score and what material differences we might observe.

Understanding that Sisk-Leiter is an approximation for Hein at this point, we can show the relationship between the Sisk-Leiter scores of the top 70 or so schools (plus a few schools we have estimates for at the bottom of the range) and their peer scores.

This is a remarkably incomplete portrait for a few reasons, not the least of which is that the trendline would change once we add 130 schools with scores lower than about 210 to the matrix. But very roughly we can see that peer score and Sisk-Leiter score correlate, with a few outliers—those outperforming their peer score via Sisk-Leiter above the line, those underperforming below the line.

But this is also an incomplete portrait for another reason. USNWR standardizes each score, which means they place the scores in relationship with one another before totaling them. That’s how they can add a figure like $80,000 of direct expenditures per student to an incoming class median LSAT score of 165. Done this way, we can see just how much impact changes (either natural improvement or attempts to manipulate the rankings) can have. This is emphatically the most important way to think about the change. Law school deans who see that citations are a part of the rankings and reorient themselves accordingly may well be chasing after the wind if costly school-specific changes have, at best, a marginal relationship to improving one’s overall USNWR score.

UPDATE: A careful and helpful reader pointed out that USNWR standardizes each score but "rescales" only at the end. So the analysis below is simplified: I take the standardized z-scores and rescale them myself. This still allows us to make relative comparisons to each other, but it isn't the most precise way of thinking about the numerical impact at the end of the day. It makes things more readable, but less precise. Forgive me for my oversimplification and conflation!

Let’s take a look at how scaling currently works with the USNWR peer scores.

I took the peer review scores from the rankings released in March 2018 and scaled them on a 0-100 scale—the top score (4.8) became 100, and the bottom score (1.1) became 0.
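
Mechanically, this is just min-max scaling. Here is a minimal sketch; the 1.1 and 4.8 endpoints are from the text, the other inputs are illustrative, and the same function handles the Sisk-Leiter rescaling discussed below.

```python
def rescale(score: float, lo: float, hi: float) -> float:
    """Min-max scale a score to 0-100: the bottom value maps to 0, the top to 100."""
    return 100 * (score - lo) / (hi - lo)

# Peer scores from the March 2018 rankings: 4.8 becomes 100, 1.1 becomes 0.
print(rescale(4.8, 1.1, 4.8))  # 100.0
print(rescale(3.0, 1.1, 4.8))  # ~51.4 -- a mid-pack peer score lands near the middle
# Same approach for the Sisk-Leiter scaling below (75 -> 0, Yale's 1474 -> 100).
print(rescale(210, 75, 1474))  # ~9.6 -- a ~210 score sits near the bottom decile
```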

As you can see, the scaling spreads out the schools a bit at the top, and it starts to compress them fairly significantly as we move down the list. I did a visualization of the distribution not long ago, but here you can see that an improvement in your peer score of 0.1 can nudge you up, but it won’t make up a lot of ground. That said, the schools are pretty well spread apart, and if you ground it out, you could make some headway by improving your peer score by 0.5 points—a nearly impossible feat. Coupled with the fact that this factor is a whopping 25% of the rankings, it offers opportunity if someone could figure out how to move. (Most schools just don’t move. The ones that do tend to do so because of name changes.)

Now let’s compare that to a scaling of the Sisk-Leiter scores. I had to estimate the bottom 120 or so scores, distributing them down to a low-end Sisk-Leiter score of 75. There’s a lot of guesswork, so this is extremely rough. (The schools ranked around 70th or so have a Sisk-Leiter score of around 210.)

Yale’s 1474 becomes 100; the 75 I created for the bottom becomes 0. Note what happens here. Yale’s extraordinarily high citation count stretches out the field. That means lots of schools are crammed together near the bottom—consider Michigan (35), Northwestern (34), and Virginia (32). Two-thirds of schools are down in the bottom 10% of the scoring.

If you’re looking to gain ground in the USNWR rankings, and assuming the Hein citation rankings look anything like the Sisk-Leiter citation rankings, “gaming” the citation rankings is a terrible way of doing it. A score of about 215 puts you around 11 on the scaled measure; a score of around 350 puts you at 20. Gaining that much ground through citations is comparable to the dramatic (and nearly impossible) movement of a 0.5-point peer score improvement.

But let’s look at that 215 again. That’s about a 70 median citation count. Sisk-Leiter is over three years. So on a faculty of about 30, that’s about 700 faculty-wide citations per year. To get to 350, you’d need about a 90 median citation count, or increase to around 900 faculty-wide citations per year. Schools won’t be making marked improvements with the kinds of “gaming” strategies I outlined in an earlier post. They may do so through structural changes—hiring well-cited laterals and the like. But I am skeptical that any modest changes or gaming would have any impact.
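
The back-of-the-envelope arithmetic in that paragraph looks like this; the faculty size of 30 and the three-year window are the assumptions stated above.

```python
# Back-of-the-envelope: median citations over the three-year Sisk-Leiter window,
# translated into rough faculty-wide citations per year (faculty of 30 assumed).
def citations_per_year(median_citations: float, faculty_size: int = 30, years: int = 3) -> float:
    return median_citations * faculty_size / years

print(citations_per_year(70))  # ~700 per year, roughly a ~215 Sisk-Leiter score
print(citations_per_year(90))  # ~900 per year, roughly a ~350 Sisk-Leiter score
```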

There will undoubtedly be some modest advantages for schools that dramatically outperform their peer scores, and some modest injury for a few schools that dramatically underperform. But for most schools, the effect will be marginal at best.

That’s not to say that some schools won’t react inappropriately or with the wrong incentives to a new structure in the event that Hein citations are ultimately incorporated into the rankings.

But one more perspective. Let’s plug these Sisk-Leiter models into a USNWR model. Let’s suppose instead of peer scores being 25% of the rankings, peer scores become just 15% of the rankings and “citation rankings” become 10% of the rankings.
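
Here is a minimal sketch of that hypothetical reweighting. The 25%, 15%, and 10% weights are the ones named above; the scaled scores are illustrative, not any school’s actual values, and this follows the simplified rescaled-score approach rather than USNWR’s actual z-score method.

```python
# Hypothetical reweighting: 25% peer score today vs. 15% peer + 10% citations.
# Scaled scores (0-100) below are illustrative, not any school's actual values.
def current_contribution(peer_scaled: float) -> float:
    return 0.25 * peer_scaled

def proposed_contribution(peer_scaled: float, citation_scaled: float) -> float:
    return 0.15 * peer_scaled + 0.10 * citation_scaled

# A school with a strong peer score but a low scaled citation score loses ground:
print(current_contribution(80))       # 20.0 points toward the overall score
print(proposed_contribution(80, 30))  # 15.0 points -- a 5-point swing on this factor
```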

UPDATE: I have modified some of the figures in light of choosing to use the z-scores instead of adding the scaled components to each other.

When we do this, just about every school loses points relative to the peer score model—recall in the chart above, a lot of schools are bunched up in the 60-100 band of peer scores, but almost none are in the 60-100 band for Sisk-Leiter. Yale pushes all the schools downward in the citation rankings.

So, in a model of 15% peer score/10% citation rankings among the top 70 or so Sisk-Leiter schools, the typical school drops about 13 scaled points. That’s not important, however, for most of them—recall, they’re being compared to one another, so if most schools drop by roughly that amount, then most should remain unchanged. And again, that drop applies only to a factor worth about 25% of the rankings, so it translates to just a few points of the overall score.

I then looked at the model to see which schools dropped 24 or more scaled points in this band (i.e., a material drop in score, given that this only accounts for about 25% of the ranking): Boston College, Georgetown, Iowa, Michigan, Northwestern, Texas, Virginia, and Wisconsin. (More schools would likely fit this model once we had all 200 schools’ citation rankings. The University of Washington, for instance, would also probably be in this set.) But recall that some of these schools—like Michigan and Northwestern—are already much higher than many other schools, so even dropping like this would have little impact on the ordinal rankings of law schools.

I then looked at the model to see which schools dropped 10 points or fewer (or gained!) (i.e., a material improvement in score): Chicago, Drexel, Florida International, George Mason, Harvard, Hofstra, Irvine, San Francisco, St. Thomas (MN), Toledo, and Yale. Recall that Yale cannot improve beyond #1, and Harvard and Chicago are already so high in the rankings that marginal relative improvements in this area are unlikely to affect the ordinal rankings.

All this means that for the vast majority of schools, we’ll see little change—perhaps some randomness in rounding or year-to-year variations, but I don’t project much change at all for most schools.

Someone with more sophistication than I could then try to game out how these fit into the overall rankings. But that’s enough gaming for now. We’ll wait to see how the USNWR Hein citation figures come out this year, then we might play with the figures to see how they might affect the rankings.

(Note: to emphasize once again, I just use Sisk-Leiter. Hein will include, among other things, different citations; it may weigh them differently than Sisk-Leiter; it uses a different window; it may use different faculty; and the USNWR citation rankings may well include publications in addition to citations.)

MBE scores drop to 34-year low as bar pass rates decline again

On the heels of some good news in recent administrations of the July bar exam comes tough news from the National Conference of Bar Examiners: the Multistate Bar Exam (MBE) scores have dropped to a 34-year low, their lowest point since 1984.

For perspective, California's "cut score" is 144, Virginia's 140, Texas's 135, and New York's 133. Among recent years, a mean score of 139.5 is comparable only to 2015 (139.9). One would have to go back to the 1980s to see comparable scores: 1982 (139.7), 1984 (139.2), & 1988 (139.8).

I’d hoped that perhaps the qualifications of students had rebounded a bit as schools improved their incoming classes a few years ago; perhaps students were putting more effort into the bar than in previous years; or other factors were at work. That appears not to be the case this year.

That said, MBE scores may be slightly less predictive of what will happen with actual bar pass rates. The NCBE has pointed out that the rise of the Uniform Bar Exam has led to a number of test-takers transferring scores to new jurisdictions rather than taking a second jurisdiction’s bar—and, presumably, those who pass in one jurisdiction are much more likely to pass in another jurisdiction (accepting that cut scores vary in some jurisdictions). The NCBE points to a few thousand such transfers last year, at least some of whom might otherwise have taken a second bar exam. But put against more than 40,000 MBE test-takers, the effect, while real, may be small.

Instead, we’re left to watch as results come in state by state. Tracking first-time pass rates (from jurisdictions that have shared them so far—ideally, ABA graduates would be a better measure, but this works reasonably well for now), declines have been pretty consistent: New Mexico (-14 points), Indiana (-3), North Carolina (+1), Oklahoma (-8), Missouri (-7), Iowa (-3), Washington (-3), and Florida (-4). But in many of these jurisdictions, pass rates were worse in, say, 2015 or 2016.

We’ll know more in the months to come, but it looks like another year of decline will cause some continued anguish in legal education. The increased quality of law school applicants this year will help the July 2021 bar exam look much better.

Note: I chose a non-zero Y-axis to show relative performance.

Federal Judicial Clerkship Report of Recent Law School Graduates, 2018 Edition

I've regularly posted judicial clerkship statistics on this blog. This year, I offer something slightly different: "Federal Judicial Clerkship Report of Recent Law School Graduates, 2018 Edition," a report I've posted on SSRN.

This Report offers an analysis of the overall hiring of recent law school graduates into federal judicial clerkships from 2015 to 2017 for each law school. It includes an overall hiring report, regional reports, overall hiring trends, an elite hiring report, and trends concerning judicial vacancies.

A preview of overall placement:

There's also been a decline in total law school federal clerkship placement, likely attributable in part to the rise in federal judicial vacancies:

For these and more, check out the Report!