On the heels of an idea floated by my colleague Professor Rob Anderson… “Why don’t committees do a blind read of everyone’s stuff before looking at credentials? Yeah it takes resources, but you are making a multi-million $ investment for the next 40 years.”
I’ve thought about this over the last several days and wanted to offer a couple of ideas. These are very much working ideas, so feel free to critique!
Here’s the specific problem I’m trying to solve (and I speak generally, so it may not apply to your particular school, committee, or search!). Too often, at the Association of American Law Schools (AALS) Faculty Recruitment Conference (FRC), faculty hiring decisions at law schools are made via shortcuts. We sift through hundreds of applications between August and October, looking at certain cues like law school attended, academic honors, visiting assistant professor (VAP) position history, elite clerkships, or other CV items that signal prestige and quality. This often leads to a fairly narrow set of candidates who meet the criteria, and it tends to be those from elite socioeconomic backgrounds. It often demands geographic transience and flexibility from applicants, and it can produce inequalities among candidates that could run along race, sex, or class lines.
But even though schools are looking at these proxies, that’s not what law schools actually desire in candidates. They’re looking for faculty who can write, who can produce and engage in good scholarship. They’re looking for good teachers, who can communicate complicated concepts with clarity to law students. These qualities can be challenging to identify early in one’s career, but they are what schools are looking to identify.
Many candidates—and perhaps most successful candidates—already have something in the way of scholarship, at least one publication, or even a good draft. After screening interviews in October, that’s what would be used as the “job talk” between October and February. Some schools request that paper between August and October, ahead of the screening interview. And now some applicants can upload that paper with their initial application.
But law schools still primarily filter and screen with these cues or shortcuts first. Reading the scholarship comes later. On top of that, the scholarship is then filtered through the bias of those cues, where readers are inclined to think the work is going to be good because, well, it made it through that filter! And if the article has placed in a sufficiently “prestigious” journal (perhaps even the VAP’s home institution’s journal…), the reader’s bias is amplified even further.
So, why not have an opportunity to read scholarship without any of those cues? No resume, no pedigree, no placement information if the article is placed. Just a blind read of articles.
This could take one of two forms.
First, there could be a database, say in July, where prospective academics could upload an article. It would be stripped of their name and of any indication of whether it was published. Perhaps it would include a few general scholarly fields if schools wanted to winnow their searches. Law school hiring committees could then pull articles from the database and review them internally. The database would disclose the identity of the author upon request, but only after 14 days had passed since the school pulled the article (essentially, a forced cooling-off period). Schools, now armed with internal blind reviews of scholarship, could identify these candidates in the AALS pool after the Faculty Appointments Register (FAR) distribution occurs.
Second, some volunteer law professors in various fields could offer to do blind review of articles submitted over the summer. The reviewers would assign each article a grade. Articles that “passed,” or that received a sufficiently high grade (e.g., 3.5 out of 5 stars), would be publicly identified, with the author’s name. Those with too low a score wouldn’t be identified, so a low score would be indistinguishable from not submitting an article at all.
The second, in some ways, would be something like an NFL “draft grade” for prospective football players leaving college early, where independent evaluators assess their talent and tell them where they’d be projected to go.
The first has the benefit of giving committees full control over their review processes, and it requires no recruiting of outside reviewers. The second has the benefit of handling the volume of articles, if many choose to take advantage of the system, and of carrying a greater “peer review” sentiment.
Of course, these are costly choices—maybe it really is the case that the filtering cues are good at identifying those who go on to be good scholars. Or maybe it’s not so much that the cues identify good scholars as that they provide a first rough cut, and the material review of scholarship can happen after that first cut.
Really, then, the benefit would be for “diamonds in the rough,” those scholars who lack the pedigree but who may have some real promise as an academic.
It might also be that blind review of scholarship that has already gone through a pretty rigorous polishing by mentors in a VAP or PhD program doesn’t help us a whole lot.
And it might be the case that this places more pressure on candidates to write something early. But, to be frank, that ship may have sailed, and we really do expect people to have started their writing careers before entering tenure-track academia.
But, here are a couple of potential models. Better than the status quo? One preferable to another? Improvements to be made? A third way?