**Northwestern University is a research university with 12 schools and colleges, 21,000 students, and more than 3,000 faculty members. The university uses Crowdmark for collaborative grading, standardized feedback comments, and bulk editing of point values.**
Ensuring that grading is efficient, consistent, and high quality is a struggle for many educational institutions, and Northwestern University was no different. When the math department went searching for a grading solution and found Crowdmark, little did they know that it would eventually save hundreds of hours of grading time and be adopted by multiple departments, including Economics, Mathematics, Physics, Chemistry, and Psychology.
Grading paper-based exams for large classes was challenging. The sheer amount of time instructors spent grading exams was frustrating, and it took time away from research, class preparation, and helping students. The problem of inefficient grading workflows was exacerbated by the pandemic, when all of Northwestern's courses moved to remote delivery. During this time, the university began looking for a solution that would improve the grading experience for instructional teams.
Northwestern also aimed to improve the quality of the assessment process for students. With class sizes sometimes in the hundreds, regrading, changing marking schemes, and repeating detailed comments on paper-based exams slowed the process down.
In searching for a solution to these problems, Northwestern had a number of goals:

- Improve the grading experience for the instructional team.
- Ensure fairness and consistency of the assessment process for students.
- Gain insights from the archive of graded work.
Northwestern selected Crowdmark to achieve their goals. One of the additional benefits that set Crowdmark apart was its ability to work seamlessly for large-scale courses. With Crowdmark now in place, Northwestern’s grading teams, large and small, can grade a single assessment simultaneously while collaboratively seeing, sharing, and standardizing each other’s comments.
Instructors and TAs can also create comments for students on the fly and reuse them when needed. Comments can include helpful feedback, hyperlinks to resources, images, graphs, equations or chemical notation, and, in some cases, point values. One of Northwestern University's favorite features is bulk editing, where any change made to a comment retroactively applies everywhere that comment was used. By using comments in these ways, Northwestern achieved its goal of improving the grading experience for the instructional team.
In addition to creating and reusing comments, some instructors import their comments into the assessment comment library as a rubric that populates comments and point values. This ensures that the grading team grades assessments fairly and consistently, and it mitigates duplication of the team's efforts.
When grading is complete, Northwestern's instructors use Crowdmark's question-specific analytics to see which concepts students are struggling with and adjust their courses and/or assessments to improve learning outcomes.
After years of struggling with paper-based grading, Northwestern has achieved all of its original goals using Crowdmark. What started as a solution for the math department expanded into a university-wide grading and assessment system, and the results are impressive. Instructors have found that Crowdmark reduces grading time by 40% while increasing the quality of the grading. And because of the rich, clear feedback students receive, instructors are fielding fewer regrade requests. In total, grading teams at Northwestern have saved approximately 16,660 hours (or 650 days) of grading time. A win-win for both instructors and students.