[Photo: students writing exams. Courtesy of Fabian Pittroff]

As we reach the end of the first month of school, many students are facing their first test of the year. This can be nerve-wracking: a new set of instructor expectations, unfamiliar material, and potentially high stakes. All of this raises a question: what’s the point of assessment, anyway? The rhetoric around standardized testing would have you believe that there is none, but that isn’t the case. When used intelligently, assessments can help students learn better and show instructors where to focus their efforts.

To begin with, not all assessments are the same. There are several major types (each of which could be further subdivided): long-form written (e.g., term papers, lab reports, mathematical proofs); short-form written (e.g., blog posts, reading responses, or problem sets); exams (e.g., midterms, in-class tests, oral examinations); and quizzes (including drills). Put another way, we can divide assessments by where they occur (at home vs. at school) and by how long the questions should take to answer. They could also be divided by question type: open-ended (multiple ‘correct’ answers), directed (one or a limited number of correct answers), and standardized (multiple-choice).

Each of these categories illuminates a different aspect of the learning process. For example, a quiz is helpful for assessing superficial knowledge or immediate comprehension. Such assignments may be informal, even ungraded, in order to offer a snapshot of students’ current abilities. In contrast, a take-home assessment shows a student’s deeper understanding of a topic, and is often summative. For an open-book assignment, students don’t get points for knowing facts, like the quadratic formula or the year of the Anschluss. These are easy to look up. Instead, they get credit for correctly recognizing that the quadratic formula is the right method for solving a word problem, or for being able to elucidate some of the reasons behind Hitler’s annexation of Austria. It is still possible to lose points for not knowing the facts (for example, by claiming that the Anschluss was in 1942, or by confusing the quadratic formula with the Pythagorean theorem), but this is because a failure to check for such simple errors is sloppy. On an in-class, closed-book test, however, these mistakes do not take away credit; they simply fail to gain it. This is because they reveal a failure of memory rather than something more serious.

From the student perspective, there are real advantages to having a variety of forms of assessment add up to a grade. For example, students often claim (rightly or wrongly) that they are “good at” tests or essays, but “bad at” the other. If their final grade depends solely on the form of assessment they are “bad at”, they may dwell on this fact, and their anxiety may make them perform worse. On the other hand, if half the grade comes from a test and half from an essay, they can be more confident of their ultimate success in the course: their ability to perform well on one makes up for their perceived inability in the other. This frees them to focus their efforts on the area that needs improvement.

It’s also to students’ advantage to have many lower-stakes assessments rather than a single heavily weighted one (or only a few). Lower-stakes testing shows students where they need to improve and allows them to make mistakes at a relatively small penalty. Such assessments are also useful for instructors, who can use them to see whether a new concept has ‘stuck’ over a break (even one as short as a weekend) or to check whether older material needs to be reviewed.

Sometimes (and perhaps ideally), assessment can even lead to consolidation: the solidifying of learning. For example, a quiz at the end of a class session or a unit test forces the student to review and remember the material s/he has just learned. This process of reaching back into one’s memory helps the lesson ‘stick’. So does the conscious repetition of studying, which is why rote memorization, while unpopular, can be effective for learning basic facts like multiplication tables. Another way that assessment can lead to deeper learning is through student reactions to feedback. This often occurs with formative assessment, and in a way is similar to guided brainstorming. A student writes a paper and receives feedback; s/he then revises in response, improving the paper’s written expression and ideas. Depending on the learning goals for the course, a single paper could go through multiple drafts before reaching its final (graded) version. Over the course of these drafts, the student should learn to analyze more deeply, to write more clearly, and to engage with others’ ideas. If that sounds too simple for an upper-level course, consider that it is the same method doctoral programs and publishers use to bring manuscripts to an advanced level.

Testing isn’t all fun and games, and not all assessment data are used appropriately. But you can’t blame the tool for the mistake of its wielder. In the end, ‘assessment’ is not a four-letter word.

About the Author: Jaclyn Neel is a visiting Assistant Professor in Ancient History at York University in Toronto, Ontario.