Experimentation in reforming legal education
Professor Dan Rodriguez has a terrific and helpful post over at Legal Evolution, "Toward evidence-based legal education reform: First, let's experiment." It comes on the heels of his call for more data to help improve law school decisionmaking.
"Data-driven" is one of the trendiest buzzwords around at the moment, but he points out that we too easily assume either that the status quo is the most effective form of legal education or that we can't figure out whether it or some alternative is any good. We need evidence: data, yes, but more fundamentally a means of comparing different kinds of legal education and ascertaining whether one is better than another. I think Professor Rodriguez rightly notes that "internal political difficulty" tends to inhibit experimentation in legal education to a greater degree than accreditation bodies or rankings factors do. That is, there's plenty of flexibility within existing accreditation frameworks that minimally affects USNWR rankings factors; it's simply a question of will, desire, priorities, and the like.
One obstacle is that this language sounds quite scientific, and it may raise concerns about institutional review board review and the like. But as a recent exchange on Twitter illuminated, labeling these efforts "pilot programs" rather than "experiments" or other scientific-sounding phrases may help ease some political concerns.
Additionally, I think it's worth emphasizing that a lot of what we subjectively believe to be "uniform" is not very uniform at all, which opens up opportunities to treat similarly situated students differently within appropriate boundaries. There might be concerns about "experimenting" with a 1L section in the legal curriculum, but, really, 1L professors may already take vastly different approaches to a theoretically identical subject, including different exam and grading methodologies. If professors' academic freedom in the classroom is an acknowledged differentiator among similarly situated sets of students, then a willingness to try "pilot programs" among subsets of law students is not much of a leap beyond it.
Importantly, Professor Rodriguez highlights the randomized nature of such programs, and that's essential. Many students self-select into offerings like bar prep classes, clinics, or externships. That self-selection bias clouds the results and may leave us unable to identify whatever independent value those programs have. For instance, a self-motivated student may opt for a bar prep class while a fellow student with similar grades but less motivation does not; if the first student passes the bar and the second doesn't, that tells us little (if anything) about the class itself.
In short, it requires political will and time from invested professors to make some of the changes Professor Rodriguez identifies. Unfortunately, little of significance appears to have happened in legal education in recent years, even in the face of dropping bar exam pass rates. Some schools and some isolated programs may be trying things, but even those haven't been deemed so wildly successful that other schools are racing to replicate them. Let's hope there's more movement in the years ahead.