Some thoughts about "bar exam federalism"

Professor Dan Rodriguez has some important thoughts over at PrawfsBlawg about the “high costs” of “bar federalism.” I had a few thoughts I wanted to add to his helpful perspective.

While most states have moved toward a delayed bar exam with an expanded “limited practice” model for would-be bar exam test-takers, Utah recently announced a “diploma privilege” model, where graduates of certain ABA-accredited law schools who have 360 hours (or about nine 40-hour weeks) of supervised practice by the end of the year will be eligible for admission without a bar exam. (This model, of course, is subject to public comment and review.)

Utah chooses not to limit the diploma privilege to its two in-state law schools, Utah and BYU—a concern I raised earlier in considering the cohorts affected by delays to the bar exam. Instead, it extends the privilege to graduates of any ABA-accredited school with a 2019 cumulative bar passage rate of at least 86%, which was the State of Utah’s bar pass rate.

Professor Rodriguez laments that this standard might disproportionately and adversely affect California’s law schools, because California has an unusually high cut score for its bar examination. That is, an 86% threshold doesn’t make a lot of sense when we consider the varying bar exams around the country—and that Wisconsin has two law schools with diploma privilege that would essentially automatically qualify. While I think his concerns are legitimate, I look at it from the opposite perspective (in a way that negates some of those concerns), and I find the impact descriptively overstated.

That is, while Professor Rodriguez laments that many schools are left out of Utah’s proposal, I see Utah’s proposal as exceedingly generous, increasing the diploma privilege opportunity from two in-state schools to around 65 schools, about 1/3 of all ABA-accredited schools! I suppose it all depends on one’s perspective.

Now, a slightly unfair narrative here, so please bear with me—even indulge me. Utah’s bar exam statistics disclose very little. But Utah knows how many out-of-state law school grads take its bar exam each year. My back-of-the-envelope calculations suggest at least 2/3 of test-takers are in-state. For a bar that has around 225 first-time test-takers in July from ABA-accredited schools, we are dealing with a very small pool of out-of-state test-takers in the first place—say, around 75, if not fewer.

Furthermore, we know that “national” schools, typically “selective” or “elite” schools, place graduates nationally. So if the Utah bar is concerned about a rule that would keep graduates of, say, schools from New Haven or Cambridge from returning to Utah, it needs a rule that would allow them, too. But not a rule based on something crass like USNWR rankings.

I don’t think an 86% bar passage rate is a great way of measuring schools with a sufficient quantity of “good” graduates such that the Utah state bar feels comfortable admitting them without an exam, but it has its virtues. For instance, every school in the Top 20 of the USNWR rankings makes the cut. Outside the top 20, only a few in, say, the top 45 miss the cut—Emory, UC-Irvine, UC-Davis, and the University of Washington, to name most if not all. And this is also a notable cut line given that both BYU and Utah are in the top 45 of the latest USNWR rankings. Again, crass, but roughly accurate.

If one considers the selectivity of the law school as both a proxy of the number of out-of-state bar exam test-takers, and the quality of the graduates, then the standard gives the benefit to a batch of other schools that fall outside the top 45 but, on the whole, gives an advantage to the very schools most likely to send grads to Utah and to pass the bar at the highest rates.

But, Professor Rodriguez wonders about a disproportionate impact on California schools. Dean Paul Caron, for instance, emphasizes that just four of California’s 21 ABA-accredited law schools would qualify.

Focusing on USNWR top-45 schools, UC-Davis saw just 12 of its 148 test-takers take a non-California bar in 2019—scattered across 5 jurisdictions, with an out-of-state first-time pass rate of 75% (and not reflective of California’s high cut score, it should be noted). UC-Irvine had just 11 of its 135 test-takers take a non-California bar in 2019—scattered across 7 jurisdictions, with an out-of-state first-time pass rate of 91%. (These are much lower out-of-state figures than either Emory or Washington.)

This is to emphasize an earlier point—the most “elite” or “selective” schools disproportionately place students in out-of-state bar exams. And California’s schools—even California’s very good schools like Davis and Irvine—place very few out of state. (It’s also, I think, a testament to law students choosing California schools as a greater commitment to remaining in California.) And, I think it’s fair to assume, very, very few into Utah.

Indeed, the Utah state bar knows, as of April 1 (its cutoff for this rule), which test-takers from which schools enrolled for the state’s bar. I would guess that only a fraction of its prospective test-takers don’t make the cut.

That’s disappointing for them, to be sure. But, again, I look at it from the perspective of allowing about 63 law schools to secure diploma privilege where I’d expect two. And while some out-of-state would-be test-takers are out of luck, so are repeaters for this administration of the exam—some of whom assuredly would have passed.

There are probably better rules to come up with, as Professor Rodriguez emphasizes. But they would be more complicated and targeted at an increasingly small cohort of students. That isn’t to diminish the deep disappointment recent graduates of those excluded law schools must feel as they face a delay to their practice of law, and the need to take a bar exam when others don’t. But it is to say that I think the proposal is not only generous to the vast majority of law school graduates who’d take the Utah bar, but also adversely affects very few. Perhaps the Utah bar will disclose those figures in the weeks ahead.

Should law students publicize embarrassing or distasteful activities of their classmates that arise in the classroom?

The answer, in my view, is an obvious and resounding “no.” But a recent piece in an online subsidiary of the Graham Holdings Company suggests otherwise.

That piece (which I won’t link to) describes the actions of a law student at a selective law school during a Zoom session of a law school class. Some of his classmates found his behavior embarrassing or distasteful, described by some as “provocative” or “inappropriate.” The online piece included a screenshot of the student in the classroom, presumably captured and shared by a fellow classmate. Other classmates described the student’s behavior and participation in other classes, editorializing their disapproval of his comments.

The students had enough self-awareness to speak anonymously, because their school “might punish them for revealing details of a class.”

In my view, this is not a close case. Students should not publicize the activities or comments of their fellow students in the classroom—even if they are embarrassing or distasteful.

First, while some comments or activities might be embarrassing or distasteful to some, they might not be to all. This then drives students to selectively capture and share embarrassing or distasteful remarks to a select audience to critique.

Granted, there is, I think, a distinction between activities in the classroom and comments in the classroom—the latter often being used to advance academic discourse. But this Graham Holdings piece made sure to include commentary about the student’s comments, too, with editorialized statements of disapproval.

Second, there are real questions about a student’s state of mind. What might be embarrassing or distasteful to some might simply be an accident, an oversight, or a mere lack of sensitivity. I can think of any number of times I’ve used a word or phrase in the classroom intending no offense but adversely affecting some. Public shaming presumes culpability.

Third, there are particular concerns that arise in a virtual setting. Students inadvertently leave cameras or microphones on when engaging in any number of personal activities. Their cameras might capture the intimate contents of their bedroom. We should most strongly discourage publicizing what we perceive as embarrassing or distasteful activities in these circumstances.

Fourth, there’s a question of proper channels. If a student engages in threatening or harmful behavior, reports to the professor, the administration, and the police may be in order. If the activity is embarrassing or distasteful, a student ought to send an email or a text to the student in question with a remark along the lines of, “I don’t know if you know your camera is on, but it looks like you’re doing X, and it might not be the best thing to have on camera.”

Even assuming a student deliberately engages in provocative behavior, it’s hard for me to think of circumstances that would justify intruding upon the classroom to publicize it. Other students in the class may justifiably wonder if their activities—perhaps innocent, perhaps accidental, perhaps deliberate but in furtherance of academic discourse—would be later publicized for shaming. The slippery slope or the chilling effect is perhaps overused in legal circles. But I think it’s a justifiable concern here, where expectations of privacy in the classroom are particularly high.

Sadly, the salient feature for others in this encounter is one I deliberately haven’t mentioned until now. There is a politically-charged element to the activity, the commentary, and the reaction to the story. While that is assuredly the driving force behind the controversy at hand, I hope that the framing of my approach to this question applies without regard to political valence. For others, the political valence is the justification or the excuse for the disclosure. For me, however, I can’t say that it is. Students simply shouldn’t publicize embarrassing or distasteful activities of their fellow students that arise in the classroom, even if they profoundly disagree on the politics.

When the Task Force on the New York Bar Examination plagiarizes your work without attribution

UPDATE: The chair of the Task Force reached out to me with apologies and intends to update the report with attribution. I’ll link to that updated report when it’s available.

My blog isn’t much. It makes no money. It garners little attention. I don’t earn money consulting from it. It contains my half-baked musings, the best of which might become an article, the worst of which I strike through and hope people forget.

But at the very least, it would be nice to see my work acknowledged if it’s useful.

Sadly, the Task Force on the New York Bar Examination found my work useful, but chose to copy without attribution.

Its recent report on the state of the bar exam takes large chunks of my blog and treats it as its own work product. Several paragraphs are lifted from my 2015 post, “No, the MBE was not ‘harder’ than usual.”

Here’s a part of my post:

The MBE uses a process known as "equating," then "scales" the test. These are technical statistical measures, but here's what it's designed to do. (Let me introduce an important caveat here: the explanations are grossly oversimplified but contain the most basic explanations of measurement!)


Standardized testing needs a way of accounting for this. So it does something called equating. It uses versions of questions from previous administrations of the exam, known as "anchor" questions or "equators." It then uses these anchor questions to compare the two different groups. One can tell if the second group performed better, worse, or similarly on the anchor questions, which allows you to compare groups over time. It then examines how the second group did on the new questions. It can then better evaluate performance on those new questions by scaling the score based on the performance on the anchor questions.

This is from page 46 of the Task Force report:

The MBE also uses a process known as “equating,” which “scales” the test to adjust for differences between exams and by different test takers over time. Equating uses versions of questions from previous administrations of the exam, known as “anchor” questions or “equators” to compare two different groups. This way, in theory, one can tell if the second group performed better, worse, or similarly on the anchor questions, which allows groups of test takers to be compared across test administrations. Then, how the second group did on the new questions is examined so that performance on the new questions can be evaluated based on performance on the anchor questions.

Here’s another part of my post:

Consider two groups of similarly-situated test-takers, Group A and Group B. They each achieve the same score, 15 correct, on a batch of "equators." But Group A scores 21 correct on the unique questions, while Group B scores just 17 right.

We can feel fairly confident that Groups A and B are of similar ability. That's because they achieved the same score on the anchor questions, the equators that help us compare groups across test administrations.

And we can also feel fairly confident that Group B had a harder test than Group A. (Subject to a caveat discussed later in this part.) That's because we would expect Group B's scores to look like Group A's scores because they are of a similar capability. Because Group B performed worse on unique questions, it looks like they received a harder batch of questions.

The solution? We scale the answers so that Group B's 17 correct answers look like Group A's 21 correct answers. That accounts for the harder questions. Bar pass rates between Group A and Group B should look the same.

In short, then, it's irrelevant if Group B's test is harder. We'll adjust the results because we have a mechanism designed to account for variances in the difficulty of the test. Group B's pass rate will match Group A's pass rate because the equators establish that they are of similar ability.

When someone criticizes the MBE as being "harder," in order for that statement to have any relevance, that person must mean that it is "harder" in a way that caused lower scores; that is not the case in typical equating and scaling, as demonstrated in this example.

Let's instead look at a new group, Group C.

On the unique questions, Group C did worse than Group A (16 right as opposed to 21 right), much like Group B (17 to 21). But on the equators, the measure for comparing performance across tests, Group C also performed worse, 13 right instead of Group A's 15.

We can feel fairly confident, then, that Group C is of lesser ability than Group A. Their performance on the equators shows as much.

That also suggests that when Group C performed worse on unique questions than Group A, it was not because the questions were harder; it was because they were of lesser ability.

This is from pages 46-47 of the report:

Consider two groups of similarly-situated test-takers, Group A and Group B. They each achieve the same score, 15 correct, on a set of the “equator” questions. But Group A scores 21 correct on the unique questions, while Group B scores just 17 of these questions right. Based on Groups A and B’s same score on the equator questions, we can feel fairly certain that Groups A and B are of similar ability. We can also feel fairly certain that Group B had a harder test than Group A. This is because we would expect Group B’s scores to look like Group A’s scores because they are of a similar capability. Because Group B performed worse on unique questions, it looks like they received a harder group of questions. Now we scale the answers so that Group B’s 17 correct answers look like Group A’s 21 correct answers, thus accounting for the harder questions. Bar pass rates between Group A and Group B should then look the same. In short, it is irrelevant if Group B’s test is harder because the results will be adjusted to account for variances in test difficulty. Group B’s pass rate will match Group A’s pass rate because the equators establish that they are of similar ability.

Now consider Group C. In the unique questions, Group C did worse than Group A (16 right as opposed to 21 right), much like Group B (17 to 21). But on the equators, the measure for comparing performance across tests, Group C also performed worse, 13 right instead of Group A’s 15. We can feel fairly certain, then, that Group C is of lesser ability than Group A. Their performance on the equators shows as much. That also suggests that when Group C performed worse on unique questions than Group A, it was not because the questions were harder; it was because they were of lesser ability.

I don’t have particular comments on the rest of the report. I just highlight that my work was copied but never cited. I’m glad someone found it a little helpful. I’d be more glad if there was attribution.
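(For readers who want to see the equating logic from that example in action, here is a toy numerical sketch. The helper functions are hypothetical and the numbers are the made-up figures from the Group A/B/C illustration above—this is not the NCBE’s actual equating procedure, which is far more sophisticated.)

```python
# Toy sketch of the equating-and-scaling idea, using the made-up numbers from
# the Group A/B/C example. Hypothetical helpers; not the NCBE's actual method.

def form_difficulty(anchor, unique, ref_anchor, ref_unique):
    """Estimate how much harder a form's unique questions were.

    Performance on the shared "anchor" (equator) questions indicates the
    group's ability relative to the reference group; any remaining shortfall
    on the unique questions is attributed to the form, not the test-takers.
    """
    expected_unique = ref_unique * (anchor / ref_anchor)
    return expected_unique - unique


def scaled_score(raw_unique, difficulty):
    """Add back the difficulty adjustment so scores are comparable across forms."""
    return raw_unique + difficulty


# Reference: Group A scored 15 on the anchors and 21 on its unique questions.
# Group B: same anchor score (15) but only 17 on its unique questions.
d_b = form_difficulty(anchor=15, unique=17, ref_anchor=15, ref_unique=21)  # 4.0
print(scaled_score(17, d_b))  # 21.0 -- Group B's pass rate should match Group A's

# Group C: 13 on the anchors, 16 on its unique questions.
d_c = form_difficulty(anchor=13, unique=16, ref_anchor=15, ref_unique=21)  # ~2.2
print(scaled_score(16, d_c))  # ~18.2 -- most of the gap reflects ability, so the
                              # scaled score stays well below Group A's
```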

What law schools can learn from a disrupted Spring 2020 semester

In the middle of a disrupted semester, law schools (and higher education in general) are making significant accommodations for faculty and students. Tenure clocks are delayed for a year, student evaluations won’t be used to evaluate faculty, grading policies have been altered, exam formats may change, and the list goes on.

But it would be a mistake for schools to throw up their hands at the end of the semester and say that nothing can be learned from it! Indeed, a great deal can be learned from this sudden experiment—yes, with some limited value and all appropriate caveats, but there’s much to consider.

Here are a couple of things that law schools should look closely at after the end of the semester—and, in some cases, schools might want to start thinking now about how to evaluate them.

First, classroom experiences. While student evaluations shouldn’t be used to evaluate teaching performance (indeed, perhaps they should be of limited value in all circumstances!), they can tell us a lot about the classroom experience of students as professors abruptly switched to online formats under a variety of approaches. Schools can look and see what worked and what didn’t. Synchronous or asynchronous? Traditional material or guest speakers?

Schools should comb through the open-response components of student evaluations to figure out whether any online practices were particularly successful—or particularly unsuccessful. Evaluations might include specific questions about the online components. Or there could be an entirely separate questionnaire submitted to students. Institutions should solicit from faculty descriptions of their online pedagogy and see what can be gleaned from those practices and student reactions. This is particularly true given the Socratic-based casebook methods used in most law schools, especially in the first-year and core curriculum.

Second, grading policies. Law schools around the country have implemented different grading policies in light of coronavirus-related disruption. Much is in dispute. Much, I think, is speculative. But whatever policy a school adopts, a school ought to examine whether the benefits or costs of the policy came to pass.

For schools that kept their grading as is: were second semester 1L grades as highly correlated with first semester as in previous 1L years? If they were less correlated, did it affect certain racial, gender, or socioeconomic groups more than others?

For schools that switched to optional pass-fail: did some students disproportionately take advantage of the pass-fail option—based on GPA quartile, demographics, etc.? Did students who remained with grades benefit from higher grades as pass-fail students worked "less," if all were curved together?

For schools that went to mandatory pass-fail: did students with better "resume bias" (e.g., elite undergraduate institutions) have a disproportionately better OCI experience than in previous years? Did bar passage rates worsen compared to graduates of other schools? [Of course, the economic downturn and alterations to the bar exam could affect this.] Did academic dismissal rates change?
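For the first of these questions—whether second-semester 1L grades tracked first-semester grades as closely as in prior years—here is a minimal sketch of the kind of check a registrar’s office could run. The file and column names are hypothetical; it assumes a simple one-row-per-student-per-year extract.

```python
# Minimal sketch: fall-to-spring 1L GPA correlation, by class year.
# Hypothetical file and column names ("year", "fall_gpa", "spring_gpa").
import pandas as pd

grades = pd.read_csv("1l_grades.csv")  # one row per 1L student per academic year

corr_by_year = grades.groupby("year").apply(
    lambda g: g["fall_gpa"].corr(g["spring_gpa"])
)
print(corr_by_year)  # is the 2019-20 correlation noticeably lower than prior years'?

# The same comparison could be repeated within demographic or GPA-quartile
# subgroups to see whether any weakening of the correlation is concentrated there.
```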

These are just a few ideas to try to measure the effects of changes to policies and look for good or bad signs.

*

I’m sure others will think of important things for law schools to reconsider—exam policies, attendance rules, faculty committees and meeting rules, “work from home” alternatives for faculty and staff, wet ink signature requirements, and so on. But I want to emphasize that schools should look to learn the right kinds of lessons from this disrupted semester.

My best advice for teaching in an online classroom? Ask for student feedback

I’ve seen a lot of law professors engage with one another on blogs or Twitter, swapping advice about tips and tricks for teaching with Zoom, teaching in a remote environment, and so on. I wanted to contribute my best, small piece of advice to that discussion: ask for student feedback.

The student’s (or “user’s”) experience may look different from the professor’s experience. Small details about sound settings, lighting, or screen sharing may look quite different to the students. Students may have anxiety about details like class discussion format or technology that professors may either not be aware of or simply fail to raise with students.

The best thing I’ve done so far is to solicit feedback from students, both before I started teaching in a virtual classroom and after. Beforehand, I discovered their concerns and anxieties and could try to address them as best I could before our first class—providing transparency and guidance, and staying aware of those concerns as we started our class together. After the first week of remote teaching, I did the same so I could tweak details like how my PowerPoint slides display on the shared screen and so on.

It wasn’t much to solicit feedback—an anonymous Google form with a single open-ended prompt in both cases.

But I think that it’s probably one of the most effective ways to figure out what students’ concerns are and to address them in the specific class environment, adapted to the specific professor’s teaching style.

We’ll see how the remainder of the semester goes, but it’s brought my attention to matters I wouldn’t otherwise have considered, and I hope it improves the experience for students in the weeks ahead.

Law school work product as a substitute for bar exams

Yesterday, I offered some thoughts about the bar exam and the potential solutions to delays caused by Covid-19. Some solutions, like “emergency diploma privilege,” primarily focus on in-state graduates of the Class of 2020 of ABA-accredited law schools taking the bar for the first time. I mused that there are several cohorts of bar exam test-takers that state bar licensing authorities must consider when adjusting their bar exam policies.

One idea I’ve mulled over is admittedly imperfect, half-baked, and labor intensive—but it opens up opportunities for essentially all comers (with some caveats) to get admitted to a jurisdiction.

In short: state bars could review portfolios of prospective attorneys’ work product (from law school, supervised practice, or actual practice) and determine whether the candidate meets the “minimum standards” to practice in the jurisdiction.

Okay, in length: this isn’t easy. But graduates of all ABA-accredited law schools are required to take a legal writing course and an upper-division writing course. Written work product is assuredly something that comes out of every law school.

Students also commonly take a core of courses that are on the bar exam and take written exams in those courses. They write exams examining issues of Civil Procedure, Evidence, Contracts, and Property.

Law school graduates also spend their summers—or some semesters—working with legal employers and developing written work product. They work in clinics or externships during the semester, and they write memos or other work product.

State licensing authorities, then, could require prospective applicants to develop portfolios of written work product. State bar graders could then go through the standard “calibration” exercises they usually do to “grade” written work product. Multiple “graders” would look at each component of the work product.

Now, there are huge problems with this, I know, and I’ll start with a few. First and foremost is the lack of standardization. Not everyone does the same writing assignments in law school, much less the same types of writing assignments. Exam essays typically lack context (e.g., they don’t have a “statement of facts” like a typical legal writing assignment would). Not everyone spends a summer in the practice of law (some spend it in, say, finance), and not everyone has written work product to share (e.g., from an externship with a judge who doesn’t want work product disclosed). There’s no scaling and equating like one has with the bar exam to improve reliability. Grading would take quite some time.

In the alternative, state licensing authorities could authorize “supervised practice” (Proposal 6 in the working paper on bar exam alternatives) and use the work product from that supervised practice later to submit to the licensing authority to supplement the law school work product.

But an advantage of this proposal, I think, is that the written product is what we’d expect of attorneys and a good measure of their ability. Law school grades (i.e., mostly the assessment of written work product) strongly correlate with bar exam scores and bar exam success. It would extend to in-state or out-of-state graduates, to ABA-accredited graduates or others, to foreign-trained attorneys, or to licensed practitioners in other states. It could even apply to those who’ve failed the bar exam before—if they’re judged on their work to be “minimally qualified,” all the better.

I toss it out as one possible solution that requires little additional or new work on the part of prospective applicants to the bar, that judges them on something relevant to their ability to engage in legal analysis, and that mitigates concerns around different cohorts of applicants to the bar.

Maybe it’s too much work, the disparities in the types of work product too vast, for us to consider. At the same time, federal judges commonly review clerkship applicants on an open-ended consideration of written work product. Perhaps there’s something to be said for looking at past written work product.

Some thoughts on the bar exam and Covid-19

A helpful and timely working paper from several law professors—including authors whose work I’ve admired in the past like Professor Deborah Jones Merritt, Professor Joan Howarth, and Professor Marsha Griggs—offers much to consider about the bar exam in light of the coronavirus pandemic and the spread of the illness Covid-19. That is, the coronavirus outbreak may persist into July and call for rethinking how to address the bar exam. They’ve done a tremendous job in a short period of time thinking about it and writing about it.

I wanted to address the question slightly differently from the framing in their paper, however. The framing in their paper is as follows:

At the same time, it is essential to continue licensing new lawyers. Each year, more than 24,000 graduates of ABA-accredited law schools begin jobs that require bar admission. The legal system depends on this yearly influx to maintain client service.

These solutions, then, are oriented toward looking at the Class of 2020 and how this cohort of attorneys can be licensed. To be sure, this is how the bulk of new lawyers are added to the legal profession each year; this is a pressing concern for law schools, whose graduates are placed into a precarious position; and this is assuredly the focus of state licensing boards.

But looking at the position slightly differently can present a very different picture: instead of looking at the Class of 2020 graduates of ABA-accredited schools taking the bar exam, one might look at the administration of the bar exam. I think this yields some contrasts in the scope of their proposals.

The authors of the paper nicely identify six alternatives—the first three “likely to fail,” the last three with “considerable promise”—and perhaps jurisdiction-specific solutions mean some will apply in some places and others elsewhere:

  1. Postponement

  2. Online exam

  3. Exams administered in small groups

  4. Emergency diploma privilege

  5. Emergency diploma privilege-plus

  6. Supervised practice

I’ll come to some details of these proposals in a moment, but the bulk of them are in the paper. At the same time, I want to focus on several populations who take the bar exam in a given year:

Cohort A. JD graduates of an ABA-accredited law school from that state: This is probably the largest contingent of bar test-takers, although many take the test out of state.

Cohort B. JD graduates of an ABA-accredited law school from out of state. Some law schools like Yale predominantly place graduates out of state. Virtually every law school sends at least some students to take another state’s bar exam.

Cohort C. LLM graduates of an ABA-accredited law school from that state. While JD graduates are the vast majority of graduates each year, foreign-trained lawyers commonly earn an LLM in the United States to enable them to take the bar exam and practice in the United States.

Cohort D. LLM graduates of an ABA-accredited law school from out of state. Given that New York and California are popular destinations for most foreign-trained attorneys, LLM graduates in other states often head to those states to take the bar.

Cohort E. Graduates of non-ABA-accredited law schools. While these are far rarer, in states like California graduates of state-accredited schools can take that state’s bar exam.

Cohort F. Test-takers who failed a bar exam previously. A significant number of retakers make up the bar exam test-taking cohort each year.

Cohort G. Attorneys admitted in other jurisdictions taking the bar. While reciprocity exists in some states, it doesn’t in others, and attorneys sometimes have to take a bar exam to get admitted to that jurisdiction.

(Maybe you can think of other groups. Let me know!)

So, the bar exam is being administered to these sets of test-takers.

Cohorts A through F are all “new” lawyers in the United States; Cohort G includes those who are already practitioners elsewhere (or perhaps let their license expire elsewhere).

Proposals 1 (Postponement), 2 (Online exam), and 3 (Exams administered in small groups) would apply to all seven of these cohorts. But, I think, as the authors of the paper note, these seem less likely options. Particularly Proposal 1—it’s not clear when this pandemic will end, and states have to act uniformly to take advantage of the uniform bar exam or the MBE. And Proposals 2 and 3 might be feasible only if aggressive measures were pursued.

Proposal 4 (Emergency Diploma Privilege) offers strong benefits for Cohort A. Undoubtedly, recent law school graduates would not have to study for the summer bar exam; they would not need to spend money on bar prep courses; they would be guaranteed to be admitted to practice (subject, of course, to character & fitness reviews, and passing the MPRE).

That said, I would take some issue with the comparison to Wisconsin—yes, Wisconsin has had diploma privilege. But (1) the diploma privilege mandates an extensive required curriculum, (2) Wisconsin’s cut score for the bar exam is the lowest in the United States, and (3) the state has just two law schools, Wisconsin and Marquette, and about 75 law schools have worse median LSAT profiles among their incoming classes than Marquette, and about 60 have worse 25th-percentile LSAT profiles. In other words, all other states have higher bar exam standards, many have graduating students with materially lower predictors of bar passage, and no other state requires the kind of core curriculum Wisconsin does.

But setting all those aside, it is an emergency situation (and perhaps Proposal 5 can help take care of some of this), and we shouldn’t expect outcomes like those in careful set-ups like Wisconsin. But note that this only benefits Cohort A. Cohort B (out-of-staters) would not benefit, unless the states began instituting some reciprocity of diploma privilege as the paper suggests as a possibility. It’s not clear that LLM graduates would benefit (in Wisconsin, for instance, they can’t—it applies only to “84 semester credit” degrees, i.e., the JD). The paper’s proposal extends only to ABA-accredited schools and first-time test-takers: “solely to graduates of the class of 2020 (including those who graduated in December 2019) from accredited law schools. Individuals who had previously taken and failed a bar examination in any state could be excluded.” (Emphasis added.) And it doesn’t help Cohort G, attorneys admitted elsewhere who are trying to get into this state’s bar.

Now, it might be that Proposal 4 is still a good proposal and needs to be supplemented with other proposals (say, Proposal 3 now that the test-taking cohort is much smaller). But it’s to emphasize that bar exam solutions focusing on recent graduates may miss significant other cohorts seeking admission to the bar.

Proposal 5 adds to Proposal 4—requiring some “bridge the gap” programs, CLE requirements, CALI lessons, or the like. It would add complexity and help overcome some of the concerns of Proposal 4—that is, given that Wisconsin has a bar that requires greater supervision on the law school end, maybe other states could require greater supervision on the back end.

Proposal 6 would allow supervised practice, with a supervisor who would advise graduates; upon completion of 240 hours of work (e.g., six 40-hour weeks), graduates could be admitted to that bar. This helps extend to Cohort B: “Notably, this option would allow jurisdictions to license lawyers graduating from law schools in any state.” Again, however, the proposal has some limitations, extending to “2020 graduates of accredited law schools.”

These last three proposals can help Cohort A. They could, in some circumstances, help Cohort B.

But it’s not clear that they would necessarily help others. It could be, I suppose, that a bar might loosen its reciprocity rules under Cohort G for those who registered to take the bar. Or it might extend some of them to non-ABA-accredited graduates.

It’s particularly worth considering, however, what to do with everyone else. That is, these programs might help recent graduates. But some people will still want to take the bar! Should states just cancel the bar? Those who failed before can’t take it? Should they try one of the first three proposals for other cohorts?

It’s not clear to me what the best approach is. The bar exam affects far more than recent law school graduates, although law school educators (including me!) are particularly concerned with this cohort. The state bar is going to have to determine how to handle all of these cohorts who might be affected if Covid-19 restrictions extend into July.

There are no easy answers. I appreciate the authors of this study for putting such clear and helpful options on the table. I imagine state bars around the country are considering the appropriate paths to take. I look forward to seeing more such discussions play out in the weeks ahead, and I hope state bars can come up with solutions that best help the legal system and all prospective test-takers.

Most law schools have become more affordable in the last three years, 2019 edition

Six years ago, I noted that around 30 law schools had become “more affordable” over a three-year period. Three years ago, I noted that most law schools had become more affordable. In the last three years, law schools have continued to become more affordable—at least, as measured by student debt.

USNWR reports average indebtedness at graduation among law school graduates and the percentage who took out loans. (Go there to see the highs and the lows.)

I removed all schools that failed to disclose debt figures for either 2016 graduates or 2019 graduates. (I had a partial data set from 2016, apologies!) That brought us to 150 schools.

Some schools are unable to read the USNWR forms correctly and report only part of the debt one year and the cumulative debt another year; I don’t attempt to determine which schools made that error, but schools appear to have gotten better at reporting data over the years.

I calculated 6.5% inflation between the Class of 2016 and the Class of 2019, and adjusted the 2016 figures accordingly. (Inflation adjustment comes with its own controversial choices, to be sure!) The debt figures listed on USNWR are an average for those who incurred debt; to arrive at a more accurate picture of the debt load of the class as a whole, I then factored in the percentage of students who graduated without any debt to reach an overall average.
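To make that arithmetic concrete, here is a minimal sketch with made-up numbers (the function and figures are hypothetical, not any school’s actual data):

```python
# Sketch of the adjustment described above, with made-up numbers.
INFLATION_2016_TO_2019 = 0.065  # the ~6.5% adjustment used here

def overall_average_debt(avg_debt_of_borrowers, pct_with_debt, inflate=False):
    """USNWR reports average debt among borrowers only; spread it across the
    whole class, optionally restating 2016 figures in 2019 dollars."""
    debt = avg_debt_of_borrowers * ((1 + INFLATION_2016_TO_2019) if inflate else 1)
    return debt * pct_with_debt

# Hypothetical school: 2016 borrowers averaged $100,000 and 80% borrowed;
# 2019 borrowers averaged $95,000 and 75% borrowed.
debt_2016 = overall_average_debt(100_000, 0.80, inflate=True)  # $85,200 in 2019 dollars
debt_2019 = overall_average_debt(95_000, 0.75)                 # $71,250
print(debt_2016, debt_2019, (debt_2019 - debt_2016) / debt_2016)  # about a 16% decline
```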

Among the 150 schools, 120 saw a decline in overall debt loads; just 30 saw an inflation-adjusted increase.

There are many possible reasons for the changes. As I explained in 2016, students may graduate without debt for many reasons: "That could be because they are independently wealthy or come from a wealthy family willing to finance the education; they could have substantial scholarship assistance; they could earn income during school or during the summers; they could live in a low cost-of-living area, or live frugally; or some combination of these and other factors. It's worth noting that several thousand students graduate each year without any debt."

Scholarship awards appear to be outpacing tuition hikes—which has been a several-year trend and places schools in increasingly precarious financial positions. Students are no longer purchasing health care due to the ability to remain on their parents' health insurance under federal law, a significant cost for students a few years ago. Schools have increasingly eased, or abolished, stipulations on scholarships, which means students graduate with less debt. Some schools have slashed tuition prices. We might simply be experiencing the decline of economically poorer law students, resulting in more students who need smaller student loans—or none at all. Students may be taking advantage of accelerated programs that allow them to graduate faster with less debt (but there are few such programs). Finally, as JD class sizes shrink, it's increasingly apparent that students who would have paid the "sticker" price are pursuing options at institutions that offer them tuition discounts.

Additionally, as I've noted before, the "percentage may be somewhat deceptive, because at a very low-cost school, a modest increase in debt load may appear, on a percentage basis, much higher than comparable increase at a high-cost school. A $10,000 increase in debt at a school that previously had just $20,000 in debt looks like 50%; at a school with $100,000 in debt, just 10%. But I thought percentage would still be the most useful."

And of course, these debt figures are only an average; they do not include undergraduate debt, credit card debt, or interest accrued on law school loans while in school. And, as I've written, "The averages are not precise, either, for individuals. The average may be artificially high if a few students took out extremely high debt loads that distorted the average, or artificially low if a few students took out nominal debt loads that distorted the average."

It's worth noting that some of these changes are hardly random.

Major announcements from institutions like Iowa, Arizona, and Chicago back in 2013 signaled major changes in tuition or scholarship structures in 2016; those schools led the reduction in debt for the Class of 2016. Those reductions remained largely steady for the Class of 2019 for Iowa and Arizona, but Chicago saw a fairly sizeable increase in debt loads.

Similarly, announcements from Tulsa, George Mason, Texas A&M, and Wayne State on slashing tuition or major scholarship programs turned into significant reductions in student debt loads.

Finally—and while it should go without saying, I fear I need to say it anyway—this is hardly a statement about whether any particular law school is a "good" value or whether the debt loads are appropriate. It's simply a relative comparison of debt loads over three years.

Inflation-Adjusted Average Law School Debt Incurred by All Law Students Between 2016 & 2019
School 2016 2019 Dollar diff Pct diff
University of Tulsa $94,834 $40,340 -$54,494 -57.5%
Northeastern University $92,739 $45,714 -$47,025 -50.7%
Ohio Northern University (Pettit) $99,056 $52,743 -$46,313 -46.8%
University of Detroit Mercy $90,919 $50,769 -$40,149 -44.2%
George Mason University $79,264 $45,946 -$33,319 -42.0%
Texas A&M University $99,638 $58,396 -$41,242 -41.4%
University of Missouri $68,569 $43,423 -$25,146 -36.7%
Elon University $143,573 $91,630 -$51,943 -36.2%
Wayne State University $64,458 $41,659 -$22,799 -35.4%
University of Kansas $72,021 $48,728 -$23,293 -32.3%
University of Wyoming $77,451 $52,565 -$24,886 -32.1%
University of Arkansas--Fayetteville $58,285 $40,030 -$18,255 -31.3%
Indiana University--Indianapolis (McKinney) $102,264 $70,370 -$31,894 -31.2%
University of Cincinnati $67,928 $46,985 -$20,943 -30.8%
Texas Tech University $72,171 $50,692 -$21,478 -29.8%
University of New Hampshire $79,842 $56,719 -$23,123 -29.0%
Florida State University $72,692 $52,888 -$19,804 -27.2%
New York Law School $136,346 $100,312 -$36,033 -26.4%
Western State College of Law at Westcliff University $101,993 $75,454 -$26,539 -26.0%
University of Richmond $93,356 $69,776 -$23,580 -25.3%
Drexel University (Kline) $86,604 $64,862 -$21,742 -25.1%
University of Louisville (Brandeis) $84,498 $63,427 -$21,071 -24.9%
Regent University $112,752 $85,343 -$27,409 -24.3%
Pace University (Haub) $106,847 $81,061 -$25,786 -24.1%
University of Toledo $76,705 $58,258 -$18,447 -24.0%
Illinois Institute of Technology (Chicago-Kent) $89,096 $68,387 -$20,709 -23.2%
University of Minnesota $78,017 $59,947 -$18,070 -23.2%
University of Tennessee--Knoxville $68,864 $53,255 -$15,610 -22.7%
University of Alabama $55,130 $43,057 -$12,073 -21.9%
Emory University $94,348 $73,766 -$20,582 -21.8%
Southern Illinois University--Carbondale $78,174 $61,142 -$17,032 -21.8%
University of North Carolina--Chapel Hill $75,952 $59,444 -$16,508 -21.7%
University of Dayton $98,846 $77,929 -$20,917 -21.2%
University of California (Hastings) $121,322 $96,303 -$25,019 -20.6%
University of Akron $75,190 $59,816 -$15,374 -20.4%
University of Nevada--Las Vegas $75,979 $60,637 -$15,342 -20.2%
Marquette University $130,402 $104,256 -$26,146 -20.1%
University of Georgia $69,415 $55,724 -$13,690 -19.7%
University of California--Davis $77,712 $62,486 -$15,226 -19.6%
University of Michigan--Ann Arbor $113,064 $91,026 -$22,038 -19.5%
Washburn University $66,857 $53,847 -$13,010 -19.5%
St. John's University $95,450 $76,945 -$18,505 -19.4%
Liberty University $59,671 $48,107 -$11,565 -19.4%
University of Pittsburgh $89,232 $72,596 -$16,636 -18.6%
Samford University (Cumberland) $108,188 $88,037 -$20,151 -18.6%
Suffolk University $106,737 $87,090 -$19,648 -18.4%
Northwestern University (Pritzker) $107,778 $88,138 -$19,640 -18.2%
Albany Law School $89,205 $72,957 -$16,248 -18.2%
University of Colorado--Boulder $81,575 $66,766 -$14,809 -18.2%
University of Oklahoma $67,108 $54,938 -$12,171 -18.1%
University of Miami $108,997 $89,275 -$19,722 -18.1%
Villanova University $75,422 $61,872 -$13,549 -18.0%
University of Illinois--Urbana-Champaign $76,180 $62,659 -$13,521 -17.7%
DePaul University $111,743 $91,990 -$19,754 -17.7%
Willamette University College of Law $131,498 $108,357 -$23,141 -17.6%
University of St. Thomas $78,999 $65,504 -$13,495 -17.1%
Baylor University $113,629 $94,415 -$19,214 -16.9%
Brooklyn Law School $84,611 $70,414 -$14,196 -16.8%
Stanford University $109,728 $91,379 -$18,349 -16.7%
Georgetown University $131,170 $109,668 -$21,502 -16.4%
Louisiana State University--Baton Rouge (Hebert) $68,023 $56,878 -$11,145 -16.4%
University of Montana $71,101 $59,526 -$11,576 -16.3%
Pepperdine University (Caruso) $126,341 $106,229 -$20,112 -15.9%
Golden Gate University $150,786 $126,974 -$23,812 -15.8%
Boston College $80,194 $68,029 -$12,166 -15.2%
Roger Williams University $116,497 $99,060 -$17,437 -15.0%
University of Maryland (Carey) $88,863 $75,764 -$13,099 -14.7%
Florida International University $87,527 $74,927 -$12,600 -14.4%
Fordham University $94,529 $81,126 -$13,403 -14.2%
Washington and Lee University $90,547 $78,408 -$12,139 -13.4%
University of Missouri--Kansas City $91,276 $79,055 -$12,221 -13.4%
University of Arkansas--Little Rock (Bowen) $55,520 $48,139 -$7,381 -13.3%
Quinnipiac University $92,334 $80,074 -$12,261 -13.3%
University of Utah (Quinney) $81,371 $70,607 -$10,763 -13.2%
University of Southern California (Gould) $103,426 $91,037 -$12,389 -12.0%
University of South Dakota $58,014 $51,107 -$6,906 -11.9%
University of Maine $74,405 $65,690 -$8,715 -11.7%
Wake Forest University $84,549 $74,714 -$9,835 -11.6%
Indiana University--Bloomington (Maurer) $78,538 $69,410 -$9,128 -11.6%
University of the Pacific (McGeorge) $135,007 $119,595 -$15,412 -11.4%
CUNY $64,328 $56,992 -$7,337 -11.4%
Duquesne University $98,700 $87,572 -$11,129 -11.3%
Seattle University $124,338 $110,377 -$13,961 -11.2%
Northern Illinois University $77,824 $69,191 -$8,633 -11.1%
University of Wisconsin--Madison $58,933 $52,502 -$6,432 -10.9%
University of Texas--Austin $73,528 $65,513 -$8,014 -10.9%
Mississippi College $103,608 $92,948 -$10,660 -10.3%
Drake University $105,759 $95,067 -$10,692 -10.1%
Yeshiva University (Cardozo) $81,204 $73,348 -$7,856 -9.7%
University of Mississippi $53,796 $48,644 -$5,151 -9.6%
University of Baltimore $94,694 $85,627 -$9,067 -9.6%
University of Pennsylvania (Carey) $118,391 $107,516 -$10,876 -9.2%
University of San Francisco $145,407 $132,305 -$13,101 -9.0%
University of North Dakota $48,761 $44,488 -$4,274 -8.8%
University of Nebraska--Lincoln $48,245 $44,066 -$4,180 -8.7%
University of Virginia $111,177 $102,309 -$8,868 -8.0%
University of Denver (Sturm) $129,882 $119,929 -$9,953 -7.7%
University at Buffalo--SUNY $79,323 $73,514 -$5,808 -7.3%
University of California--Berkeley $111,367 $103,316 -$8,051 -7.2%
Cornell University $109,464 $101,769 -$7,695 -7.0%
Duke University $105,132 $97,766 -$7,366 -7.0%
University of Notre Dame $99,175 $92,261 -$6,914 -7.0%
Harvard University $125,210 $117,278 -$7,933 -6.3%
Creighton University $111,743 $104,672 -$7,071 -6.3%
University of California--Los Angeles $92,890 $87,053 -$5,837 -6.3%
Lewis & Clark College (Northwestern) $115,655 $108,659 -$6,996 -6.0%
New York University $113,752 $107,997 -$5,755 -5.1%
Nova Southeastern University (Broad) $136,978 $130,501 -$6,477 -4.7%
Brigham Young University (Clark) $42,862 $41,168 -$1,694 -4.0%
University of Arizona (Rogers) $55,949 $53,878 -$2,071 -3.7%
Santa Clara University $113,377 $109,363 -$4,014 -3.5%
Ohio State University (Moritz) $72,098 $69,938 -$2,160 -3.0%
Temple University (Beasley) $69,212 $67,282 -$1,930 -2.8%
University of Illinois--Chicago (John Marshall) $141,204 $137,338 -$3,866 -2.7%
Vanderbilt University $87,247 $85,179 -$2,068 -2.4%
West Virginia University $68,747 $67,226 -$1,520 -2.2%
Tulane University $106,019 $103,693 -$2,326 -2.2%
American University (Washington) $127,674 $125,256 -$2,418 -1.9%
Loyola Marymount University $113,944 $113,036 -$909 -0.8%
University of South Carolina $76,948 $76,898 -$50 -0.1%
University of Houston $67,319 $67,467 $149 0.2%
Hofstra University (Deane) $117,074 $117,624 $549 0.5%
California Western School of Law $139,637 $140,401 $764 0.5%
University of Florida (Levin) $62,549 $63,195 $645 1.0%
University of Iowa $55,262 $56,082 $819 1.5%
St. Mary's University $115,854 $117,939 $2,085 1.8%
Boston University $74,210 $76,152 $1,942 2.6%
Stetson University $109,562 $112,575 $3,013 2.8%
Columbia University $108,041 $112,226 $4,185 3.9%
Seton Hall University $76,352 $80,080 $3,728 4.9%
University of California--Irvine $83,373 $87,917 $4,545 5.5%
Gonzaga University $92,832 $98,372 $5,541 6.0%
Southern Methodist University (Dedman) $90,731 $96,344 $5,614 6.2%
Yale University $88,832 $94,334 $5,502 6.2%
The Catholic University of America $102,315 $109,379 $7,064 6.9%
University of Massachusetts--Dartmouth $87,623 $95,554 $7,931 9.1%
Charleston School of Law $117,018 $128,379 $11,361 9.7%
George Washington University $93,366 $104,642 $11,276 12.1%
Washington University in St. Louis $57,885 $65,834 $7,949 13.7%
University of Memphis (Humphreys) $62,321 $71,139 $8,818 14.1%
University of Connecticut $52,844 $62,206 $9,362 17.7%
Oklahoma City University $88,184 $103,827 $15,643 17.7%
University of San Diego $91,396 $108,298 $16,902 18.5%
Ave Maria School of Law $114,409 $136,034 $21,625 18.9%
University of Chicago $89,043 $107,795 $18,751 21.1%
Florida Coastal School of Law $118,266 $147,238 $28,971 24.5%
Campbell University $115,833 $162,478 $46,645 40.3%
University of Idaho $62,984 $91,180 $28,196 44.8%
University of Kentucky $44,578 $65,102 $20,525 46.0%
North Carolina Central University $58,588 $100,022 $41,435 70.7%

The absurd volatility of USNWR specialty law rankings

Last year, USNWR dramatically expanded the scope of its specialty law rankings. It went from a handful of schools ranked in categories like clinical training and health care law, to virtually all law schools. I noted last year how this might prompt a new arms race among law schools. This year, USNWR added four categories: “Business-Corporate Law,” “Contracts-Commercial Law,” “Criminal Law,” and “Constitutional Law.”

Each of the ~200 law schools is asked to list one faculty contact in each area. Those faculty are then given a survey and asked to rate schools on a scale of 1 to 5, or to skip schools they don’t have enough information about. There is comical compression outside the top handful of schools in each category, which is why USNWR typically wouldn’t go deep into the rankings. Nevertheless, beginning last year, it started publishing rankings of all schools that received at least 10 responses.

Response rates for surveys range from around 45% to 60%, so these are basically surveys of about 100 law professors—professors identified by their deans or some administrator as being the relevant contact in the field.

To show how precarious the surveys are, many schools probably receive far fewer than 100 professors’ evaluations, and many are close to the 10-voter threshold.

I just picked the “Tax Law” category at random among areas surveyed in both 2018 (for the 2020 rankings) and 2019 (for the 2021 rankings).

Here are schools ranked in 2020 (with their peer score averages on a scale of 1-5) that were unranked in 2021:

Barry 1.0
St. Thomas (Florida) 1.2

And here are schools ranked in 2021 (with their peer score averages on a scale of 1-5) that were unranked in 2020:

Ave Maria 1.0
Belmont 1.1
Concordia 1.0*
Drake 1.3
Duquesne 1.5
Faulkner 1.0
Lincoln Memorial 1.0*
North Carolina Central 1.1
Northern Kentucky 1.1
Regent 1.1
Memphis 1.2
Western New England 1.1

(*Two schools were added to surveys after receiving ABA accreditation.)

It’s worth considering that these schools each year probably received barely more than 10 votes, even if the survey purportedly went out to more than 100 tax law professors. It’s questionable how worthwhile much of this survey can be for schools that receive an average below 2.0 if the response rates are, in reality, so pitifully low.

Another problem is volatility. With such a small sample, we should expect high degrees of volatility from year to year, which tells us nothing about actual changes in reputation. Consider a few:

South Carolina: 2.6 (+0.6)
Loyola Chicago: 2.7 (+0.4)
Albany: 1.8 (+0.4)
Cornell: 2.9 (+0.4)
Ohio State: 3.1 (+0.4)
Baltimore: 2.3 (+0.4)
Cincinnati: 2.4 (+0.4)
Richmond: 2.6 (+0.4)
Washington University: 3.3 (+0.4)
-
Suffolk: 1.5 (-0.4)
Wyoming: 1.5 (-0.4)
Vermont: 1.3 (-0.4)
New England Law: 1.2 (-0.4)
Montana: 2.0 (-0.5)

To give you some perspective, in the entire history of the USNWR peer reputation scores, there has been exactly one instance of a score rising by 0.4 (and never more), and exactly three instances of a score falling by 0.4 (never more), all directly tied to scandals at the institution.

This reflects absurd year-over-year volatility in these “rankings.” It could be partly a case of a faculty member at that school who did, or didn’t, fill out the survey in one year as opposed to the other (i.e., giving one’s home institution a “5” in one year and then not voting the other year).
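To see how little it takes to produce swings of this size near the 10-response floor, here is a toy calculation (hypothetical ballots; the actual responses aren’t public):

```python
# Toy illustration of how a single ballot can swing a small-sample average.
# Hypothetical votes; the real survey responses are not disclosed.

def mean(scores):
    return sum(scores) / len(scores)

# Year 1: ten voters, one of whom (say, a home-institution voter) gives a 5.
year1 = [2, 2, 2, 2, 2, 2, 2, 2, 2, 5]
# Year 2: that voter skips the survey and a different respondent gives a 1.
year2 = [2, 2, 2, 2, 2, 2, 2, 2, 2, 1]

print(round(mean(year1), 1))  # 2.3
print(round(mean(year2), 1))  # 1.9 -- a 0.4 swing with nothing changing at the school
```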

I checked Brian Leiter’s report of lateral faculty moves last year, with only one reported tax move (from George Washington to Florida). That is, there’s nothing obvious to suggest that anything has materially changed at any of these institutions (although someone might point some detail out to me).

These comically bad surveys, however, will continue to receive outsized advertising from law schools and outsized weight from prospective law students. I simply highlight the absurdity here.