I hate grading (but love rubrics)

Roy Plotnick
5 min read · May 10, 2018


I have just finished my thirty-fifth year of teaching at the same university. Over the years, some things have changed immensely. I no longer use a blackboard or an overhead projector; instead, I employ a whiteboard and PowerPoint presentations. I interact with my students via e-mail and a flexible online interface that lets me post assignments and materials. I fully embrace this new technology. I have also attended workshops to learn new teaching methods, which I enthusiastically adopt where appropriate. Some things, on the other hand, haven’t changed. I still spend a great deal of time lecturing. I still get frustrated when students don’t appear to have studied or to be engaged with the material, or when they hand in poorly written papers at the last minute. But most of all, I hate grading.

To be clear, I am not talking about scoring the results of a multiple-choice exam, which can be done by a machine (or at least by an underpaid graduate student). These are easy, in that the answers are either right or wrong and the total is unambiguous. The obvious downside is the inevitable “Will you grade on a curve?”, by which the student means “Will my crappy score be raised to a passing grade by some mathematical manipulation?” I suspect that very few of them recognize the origins of “the curve” in the ubiquitous presence of the normal distribution. The less obvious downside, pointed out by my colleagues in the education community, is that multiple-choice tests are very limited in their ability to measure learning. They are very good at revealing whether students can regurgitate memorized facts; they are really poor at determining whether students can integrate and use those facts. To do that, we must turn to the essay test and the term paper. It is grading these that has long troubled me.

An essay question should determine not only whether students know the facts of a scientific question (“name two major ice ages in Earth history”) but also whether they can integrate them into a broader picture (“compare their impacts on life”). We also want the answer to be well organized and well written. Similarly, a term paper ideally should demonstrate a student’s mastery of a defined subject, through their knowledge of the scientific literature and their ability to integrate it into a coherent narrative. Again, it needs to be well organized and well written. But how do we measure these desired characteristics and use them to objectively assign a letter grade? “A” grades are easy (that is a great paper!), as are “F”s (did you even study?). The madness lies in between. What is the difference between a “B+” and an “A-” paper, or between a “C-” and a “C” essay answer? For a long time, I accepted the often arbitrary nature of these distinctions, which had been implicit in my own education. I often felt a bit of contempt for those who would argue with the professors about their grades; I assumed I got what I deserved, even when it was not always clear why (some function of the number of red marks?). And I passed this on to my own students; it was the professorial equivalent of the parental “because I said so!” Then, about fifteen years ago, I discovered the rubric. Ah, rubrics, where have you been all my life?

For those not familiar with them, rubrics are detailed written criteria for what is required to receive a particular grade on an assignment, paper, or test question; they are given to students in advance so that they know what is expected of them. They can be simple or relatively complex. For example, the rubric for exam essay questions in my recent advanced course was:

Content Score

5 Outstanding explanation of topic with superior supporting information; shows excellent command of the literature read this semester; clear and logical, perhaps showing some creativity; goes well beyond minimum needed to answer the question.

4 Good solid explanation; with very good support from examples in the literature read this semester; clear and logical; goes beyond the minimum needed to answer the question.

3 Satisfactory answer, some support from literature read this semester; only includes the minimum information needed to answer the question.

2 Decent answer, but too general or cursory or some inaccuracies or flaws in reasoning; little reference to the literature read this semester.

1 Inadequate answer; poorly reasoned or major inaccuracies; no reference to the literature read this semester to support the answer.

0 Answer missing or does not answer the question.

Writing Score

3 Well organized; good grammar; no spelling errors.

2 Decent organization; some grammatical or spelling errors.

1 Disorganized; poor grammar; poor spelling.

0 Similar problems to 1, but worse.

The total score on a question is the sum of the content and writing scores; the grade on the test is based on the total of all scores. This rubric, as well as the grading scheme, is supplied well in advance of the test.
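
For readers who like to see the arithmetic spelled out, here is a minimal sketch in Python of how such a scheme totals up. It is only an illustration of the idea; the letter-grade cutoffs in it are hypothetical placeholders, not the scale actually used in my course.

# Illustrative sketch of the rubric arithmetic: each essay question gets a
# content score (0-5) plus a writing score (0-3); the test grade comes from
# the total of all question scores. The cutoffs below are hypothetical.

def question_score(content: int, writing: int) -> int:
    """Score for one question: content (0-5) plus writing (0-3)."""
    assert 0 <= content <= 5 and 0 <= writing <= 3
    return content + writing

def test_grade(question_scores: list[int], max_total: int) -> str:
    """Convert the total of all question scores into a letter grade."""
    percent = 100 * sum(question_scores) / max_total
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if percent >= cutoff:
            return letter
    return "F"

# Example: a three-question test, each question worth 8 points (5 + 3).
scores = [question_score(5, 3), question_score(4, 2), question_score(3, 2)]
print(sum(scores), test_grade(scores, max_total=24))  # 19, a "C" on this hypothetical scale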

I view rubrics as win-win situations. Students know in advance what is expected, and I have a clear guide to grading, which makes it much less arbitrary. The inevitable arguments are much reduced. Rubrics can also speed up grading, something devoutly to be wished. For example, each of our undergraduate course lab exercises had its own scoring, which could range from 9 to 23 points, depending on how many questions there were. Not long after I learned of rubrics, I introduced a simple 10-point scheme for all of them, which was much easier.

I must admit I still hate grading. It is still time-consuming and often frustrating when students don’t do well (where did I fail them?). And I hate the awkward discussions afterwards when students argue that their grade is “unfair,” something that has plagued every educator since time immemorial and probably will far into the future. Nevertheless, I thank whoever in the education community invented the rubric for assessment, as well as those who introduced it to me. My life, and I hope that of my students, is easier as a result.

(for more on rubrics, see: https://serc.carleton.edu/NAGTWorkshops/assess/rubrics.html)


Roy Plotnick

Paleontologist, geologist, ecologist, educator. Professor at the University of Illinois at Chicago. Author of Explorers of Deep Time.