This article was first published in The Chronicle of Higher Education and is republished with permission of the author and The Chronicle.
For the first 18 years of my academic career, I ran into the same problem every semester. It happened at about the 13-week mark: I would share a tearful farewell with my family and begin serving my sentence in Grading Jail. In that moment, I would look back on a career of repeat offenses against efficient and timely grading of student work, and see clearly that I had no one to blame but myself. I was a hopeless recidivist.
Or so it seemed. Remarkably, the hard time I served was enough to rehabilitate me, and turn me into a productive member of grading society. And now — since we’re at that point of the semester — I’m ready to share what I’ve learned in hopes of saving others from the academic clink.
But first (and before I beat the jail metaphor any further into the ground), I ought to disclose that my own relationship with grades is ambivalent. I think students and institutions alike put too much emphasis on grades, I don’t think a single grade represents a student’s academic ability, and I firmly reject the idea that grades reflect intelligence or potential. That said, I also recognize the need to assess student work in a consistent and understandable manner. In a perfect educational world, there would be individualized assessments — formative and summative — and in-depth conferences in which professors and students could share and discuss those narratives. In our imperfect world, grades remain a feature of the academic landscape, and we owe it to students to use the tools we have fairly, however flawed those tools may be.
Prompt feedback may be a “best practice,” but too often during the semester we honor that injunction only in the breach. Thus, in a paroxysm of equal parts guilt and panic, we lock ourselves in Grading Jail — hard labor with no parole until we’ve atoned for our (procrastination) sins. The all-night grading binge is problematic, though. Are we really giving effective, thoughtful feedback to students at 3 a.m., after we’ve read 25 (or more) of their classmates’ essays? Are the standards we apply to the final paper the same ones we used to evaluate the first, so many hours and cups of coffee ago?
Here, then, are the three strategies I’ve found most helpful in the continuing quest to better manage my grading workflow and stay out of trouble.
- Pre-semester calendaring. Admittedly, this isn’t a strategy you can plug in mid-semester to ease your grading workload. But once it became a habit for me, it proved invaluable.
Before classes start, as I’m drafting my syllabi, I print out calendars for every month of the term and lay them out on my desk. Using a different colored marker for each section or course, I plot the due dates of every assignment I will give that semester. A cluster of different colors in a three-day span is a quick visual cue that I ought to reconsider some due dates. Is there a distinct pedagogical need to collect a stack of exam books from one course and a pile of essays from another the next day? Or can I space those due dates out differently? I know this sounds head-slappingly simple, but how many of us really do this sort of careful planning and comparison in advance? Judging from the litany of “I have to grade four sections of papers” lamentations on my Twitter feed, it’s a strategy more of us should consider. Sometimes the simplest steps pay off the most in the long run.
- Rubrics — done well — are your friend. I was a rubric skeptic early in my career, but with education and experience I’ve become a big fan of them for much of my grading. The initial impetus to consider rubrics was the realization that I was using essentially the same set of comments for much of my feedback, across classes and assignments. How many times do I want to write “use a specific example here” or “awkward phrasing — please rework”?
My initial solution was to keep open, while I graded, a Word document containing the phrases I used most often, and to cut and paste the appropriate comment as needed. I ran into two problems with that strategy, though. First, it worked only when I was grading student work electronically. Second, the sheer repetition became patently absurd: If I was writing the same comments over and over, maybe I needed to revisit just how clear my criteria were to my students. As I wrote out my criteria explicitly, I also realized that I often didn’t apply them evenly. I mean, it’s easy to be seduced by a beautifully written essay, even one that says little of substance — and especially one that comes on the heels of four stinkers in a row. Was I being as fair as I could be? And how would I know if I was? That was where rubrics came in for me, after some research and consultation with colleagues.
Constructing a rubric involves a significant investment of time on the front end, but once it’s designed, using it to assess student work cuts my grading time by more than half. I’m not writing the same basic comments over and over, because they’re on the rubric, where I can circle or highlight them. I use the time I’ve saved to concentrate on more meaningful individual feedback. Most important, having specific criteria and clearly defined benchmarks assures me that I’m being as consistent as possible in my grading. Indeed, my assignment design has improved as a result of forcing myself to define specific learning outcomes and how I plan to assess them. An additional advantage: If students have the rubric in front of them as they work, ambiguity and guesswork (as well as the anxiety those can produce) are all but eliminated from the process. That’s no small thing for a high-stakes assignment like a final research paper. (Of course, hastily written or vague rubrics provide none of those benefits, and may even exacerbate the very problems they were meant to solve.)
A caveat: Distributing a rubric in advance shouldn’t be your only conversation with students about your expectations. A good, detailed rubric promotes transparent criteria, consistently applied. But if the rubric becomes students’ only reference point, it’s easy for them to “write to the rubric,” producing homogeneity and blandness rather than achieving the learning outcomes in a creative and genuine way.
- I can talk faster than I write. So can you, I imagine. In the past couple of years, speech-to-text options (Google’s Gboard mobile app, for example) have proliferated. Dictating comments into a Google Doc, with speech-to-text transcribing them as you speak, is one way to provide substantial feedback on a large amount of student work without developing carpal tunnel syndrome.
However, I’ve found it even more meaningful to record my comments and share them with individual students as an audio file they can listen to on any device. I stumbled into this method out of desperation several years ago. I was woefully behind on grading student essays and needed a way to get through them quickly without skimping on feedback, so I decided to do a virtual “talk-through” of the papers for each student. Using a voice-recorder app on my tablet, I recorded myself talking through each paper with summary comments at the end, which took about six to eight minutes per essay. Then I saved the files in Dropbox folders and gave students a link to their feedback folder so they could stream or download the audio as they wished.

I came to that method independently, but subsequent research showed me that audio feedback has been a practice in some quarters for both face-to-face and online courses. The research also affirmed my initial impressions and my students’ reactions: My feedback felt more personal, it balanced specific and global commentary, and students said they paid more attention to my audio comments than to standard written feedback.

Since then, I’ve streamlined my practice a bit. I read through a paper, making cursory notes in the margins. Then I formulate my overall summary and decide which themes or issues I want to capture. I record my talk-through on a voice-recorder app (I use Voisi, but there are scads of free apps out there), beginning with the summary: I tell the student what I think the paper’s strengths are and what I’d like them to focus on in the next draft or assignment. Then I do a brief talk-through of the paper, not to point out every specific error or problem, but to give my general feedback. Because audio files are sometimes too large to attach to an email, I upload them to Dropbox and send each student a shareable link.

What I’ve found is that — especially for large-scale projects and written work — audio feedback cuts my grading time just about in half, without sacrificing the depth or quality of the feedback.
Those three strategies have transformed grading from something I’ve always dreaded into something that I … well, enjoy is too strong a word. But I am now able to provide timely and meaningful assessment without locking myself away for days at a time. Professors are remarkably like our students in many ways, perhaps most obviously in how we sometimes flail around trying to manage the end-of-the-semester crush. And just as our students don’t do their best work in all-night cram sessions, neither do we. For those of you who share my ambivalence about the value of grading itself, there are ways to turn it into a more meaningful, collaborative project — for example, Cathy Davidson’s Peer-to-Peer Assessment and Contract Grading models, and Linda B. Nilson’s Specifications Grading framework. But those models take time to learn and adapt, and in the midst of a semester we aren’t blessed with a lot of extra time or motivation for that sort of long-term reflection and rethinking. You might use the winter or summer break to carry out a broad overhaul of your grading practices.
In the meantime, consider these three strategies the academic equivalent of a “get out of jail free” card. The more we can ensure consistency and fairness, the less likely we are to be beset with student complaints, and the better the chances of students actually putting our feedback to use in their subsequent work — which is the whole point, right?