Multiple Choice vs. Written Assessments

Updated article originally published August 9, 2016.

One of the most noticeable differences between higher education STEM and Humanities courses is their approach to assessment design. STEM courses commonly use multiple choice questions (MCQs), while Humanities courses rely on writing. These strategies are generally treated as complete opposites, with little to no overlap between them. However, despite popular belief, there may be a way to achieve a harmonious coexistence between the two.

There is no single reason why STEM and Humanities have developed separate assessment styles. The divide may be attributed to professors' comfort with teaching the way they were taught, a need for students to learn core concepts before integrating them into real-world solutions, and/or a lack of time to redesign courses.

Assessment preference is also closely tied to subject matter. The hard sciences lean towards specific ‘right and wrong’ answers and tend to align with multiple choice assessments. Meanwhile, questions of society and culture are more abstract and lend themselves to written argumentation.

What do MCQs and Written Answers Assess?

David Nicol, writing in the Journal of Further and Higher Education, provides a solid analysis of the differing scholarly perceptions of MCQs. “Many researchers discourage the use of MCQs, arguing that they promote memorisation and factual recall and do not encourage (or test for) high-level cognitive processes.” Others maintain that multiple-choice questions’ ability to assess “depends on how the tests are constructed” and contend that “they can be used to evaluate learning at higher cognitive levels.” Debates about MCQs remain unresolved and ongoing. Oftentimes, MCQs are judged in direct comparison to their counterpart, written assessments.

With written assessments, there is less debate surrounding what they attempt and achieve. Since “writing” is a broad category, it is easiest to understand this assessment type as either long or short answer questions. Ostensibly, these questions pose a different challenge: rather than presenting a set of answer options, they present a prompt that invites critical thinking. This is what leads to the perception that written responses are more effective in supporting higher-level thinking. However, this type of assessment is often forgone due to practical challenges in grading. Especially in online learning, MCQs allow instructors an ease-of-use assessment design that works well for team grading.

These discussions come to a head in Megan A. Smith and Jeffrey D. Karpicke’s 2013 article, which includes findings that “challenge the simple conclusion that short-answer questions always produce the best learning, due to increased retrieval effort or difficulty, and demonstrate the importance of retrieval success for retrieval-based learning activities.” These researchers emphasize the significance of retrieval in learning, as it is a measurable form of educational progression. This leaves educators with an unclear picture of which way to go.

A Question of Hybridity?

In an effort to reconcile these two seemingly opposed assessment designs, some suggest that a targeted mixture of both may be the most effective way to enhance retrieval. In a 2014 article from The Journal of Experimental Psychology, researchers tested students throughout a term with a mixture of quizzes that used MCQs and short answer questions. They found, simply, that the act of testing students helped them perform better on the final exams, with no distinct relationship between the results, quiz design, and final exam design.

What could this mean? As long as questions are well-conceived, they will assist with retrieval. One extension of this could be that the content should match the method: whether a question is best answered through multiple choice or in writing is a judgment call for the instructor. This was the traditional root of the STEM vs. Humanities divide.

For scientific questions like chemical compound nomenclature, MCQs are appropriate. For philosophical questions like “how does Deleuze interpret Spinoza?”, written answers are more effective. However, what is so frequently lost in these generalizations is the value of the other approach. For STEM instructors, including a written question on a theory or controversy in the field could help students think more critically and abstractly about their subject. Similarly, Humanities instructors could mix in some MCQs directed at testing comprehensive disciplinary fluency.

About Crowdmark

Crowdmark is the world’s premier online grading and analytics platform, allowing educators to evaluate student assessments more effectively and securely than ever before. Educators experience up to a 75% productivity gain, allowing them to provide students with prompt and formative feedback. This significantly enriches the learning and teaching experience for students and educators by transforming assessment into a dialogue for improvement.