
How Does Crowdmark Use Artificial Intelligence?

Welcome to the fourth and final part of Crowdmark’s ‘Rise of ChatGPT’ series, which explores the impact of chatbots on education and grading in 2024.

We’ve covered the rise of ChatGPT and AI-driven chatbot technology, along with the impact of this technology on grading and its implications for academic integrity.

This time, we’ll explore how the Crowdmark team and system use AI tools.

What’s the Crowdmark approach to artificial intelligence?

Crowdmark has been using artificial intelligence to support grading since 2016. The company’s formal policy is to use AI to free educators from tedious administrative tasks so that they can focus on providing rich formative feedback to students.

“We do not seek to replace educators by grading with AI, as we see teaching and learning as a fundamentally human-to-human experience,” says Michelle Caers, CEO.

This developing area is of considerable interest for the company given its broad potential impact on grading and higher education.

“Crowdmark’s approach to artificial intelligence is quite strategic,” continues Caers. “Our mission is to improve student learning through assessments: that’s always our priority. We believe students learn best when they receive thoughtful feedback from their instructors. All our technology is deployed to serve that mission.”

Jamie Gilgen, Crowdmark’s product development lead, agrees. “In general, we automate anything that has no bearing on a student’s direct learning experience, such as getting content into the platform or organizing it for grading teams. That’s always been part of our planning. We oppose solutions that automate point values or responses for open-response questions. We never want to minimize or replace human-to-human interaction.”

How Crowdmark’s platform uses AI now: Optical Character Recognition

In 2016, Gilgen leveraged existing artificial intelligence technology to sort and match student work. Her script analyzes each handwriting sample using an AI technique called optical character recognition (OCR). Its convolutional neural network functions a little like a human eye, allowing the platform to ‘see’ the handwriting, recognize the characters as letters, and match them to information in the student’s ID record within Crowdmark’s platform.

Optical character recognition automatically matching a student with their administered (in-person) assessment

“Initially, I spent a long time working through handwriting samples to check the algorithm’s output,” says Gilgen. “I penalized the algorithm when it failed, as opposed to rewarding positive results, which helped it get better at selecting the correct match. That technology has improved a lot since then, and we are currently exploring new ways to improve this automation.”

“This code, which uses AI to match handwriting to known letters, removes significant work that the instructor would otherwise have to perform to read the handwriting and manually tell us who the work belongs to,” continues Gilgen. “That kind of lift, which doesn’t interfere with the learning process, is an obvious place where AI helps us to win. The user never experiences the AI’s work, but its presence makes their use of Crowdmark infinitely better.”
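To make the matching step concrete, here is a rough sketch of the general idea in Python. It is not Crowdmark’s code: it uses the open-source Tesseract engine (via pytesseract) and simple fuzzy string matching in place of the platform’s CNN-based recognizer, and the roster data, file name, and function names are hypothetical.

```python
# Minimal sketch: OCR a handwritten name field and match it to a class roster.
# Uses the open-source Tesseract engine via pytesseract and difflib for fuzzy
# matching; Crowdmark's production pipeline (a CNN-based recognizer) differs.

import difflib
from PIL import Image
import pytesseract

# Hypothetical roster: student ID -> full name, as it might appear in an LMS export.
ROSTER = {
    "10234567": "Avery Chen",
    "10234568": "Jordan Patel",
    "10234569": "Sam Okafor",
}

def match_name_field(image_path: str, min_ratio: float = 0.75):
    """OCR the cropped name field and return the best roster match, if confident."""
    text = pytesseract.image_to_string(Image.open(image_path)).strip()
    best_id, best_ratio = None, 0.0
    for student_id, name in ROSTER.items():
        ratio = difflib.SequenceMatcher(None, text.lower(), name.lower()).ratio()
        if ratio > best_ratio:
            best_id, best_ratio = student_id, ratio
    # Below the threshold, defer to a human rather than guess.
    return (best_id, best_ratio) if best_ratio >= min_ratio else (None, best_ratio)

if __name__ == "__main__":
    student_id, confidence = match_name_field("page1_name_field.png")
    print(student_id, round(confidence, 2))
```

Anything falling below the match threshold would be handed to a person to resolve, mirroring the human fallback Gilgen describes for responses the system can’t recognize.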

More AI uses: Marking multiple-choice bubble sheets

When the platform first launched over a decade ago, Crowdmark didn’t include bubble sheet marking. “We were focused on our core product, which was to enable educators to digitally grade handwritten work,” Caers explains. “But over time we saw an opportunity to include automated marking of multiple-choice bubble sheets. By leveraging artificial intelligence, educators can use regular copy paper for the answer sheets, which don’t require expensive proprietary machines to mark the responses. This approach fits seamlessly into our paper-based assessment workflow.”

Crowdmark’s first iteration of this tool was built over three to four months, and then upgraded to be more accurate and allow more questions per page. 

“We use the same OCR technology that lets us sort handwriting samples to mark multiple-choice and true/false questions,” says Gilgen. “If our system can’t recognize a given response, the human marker is invited in to moderate those answers.”
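For a sense of how automated bubble marking with a human fallback could work, here is an illustrative sketch. It is not Crowdmark’s implementation: it checks fixed, hypothetical bubble positions with OpenCV and flags low-confidence or ambiguous questions for human review, echoing the moderation step Gilgen describes.

```python
# Illustrative sketch of bubble-sheet marking with a human fallback: count the
# dark pixels inside each known bubble region and flag ambiguous questions.
# Coordinates, thresholds, and the answer key are hypothetical; this is not
# Crowdmark's production approach.

import cv2
import numpy as np

# Hypothetical layout: question -> {choice: (x, y, radius)} in pixels.
BUBBLES = {
    1: {"A": (100, 200, 12), "B": (140, 200, 12), "C": (180, 200, 12), "D": (220, 200, 12)},
}
ANSWER_KEY = {1: "B"}

def fill_ratio(gray, x, y, r):
    """Fraction of dark pixels inside the bubble's circle."""
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.circle(mask, (x, y), r, 255, thickness=-1)
    region = gray[mask == 255]
    return float(np.mean(region < 128))  # pixels darker than mid-gray count as filled

def mark_sheet(image_path, filled=0.5, ambiguous_gap=0.2):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    results = {}
    for q, choices in BUBBLES.items():
        ratios = {c: fill_ratio(gray, *pos) for c, pos in choices.items()}
        ranked = sorted(ratios.items(), key=lambda kv: kv[1], reverse=True)
        top, runner_up = ranked[0], ranked[1]
        if top[1] < filled or top[1] - runner_up[1] < ambiguous_gap:
            # Not confident enough: route this question to a human marker.
            results[q] = {"status": "needs_human_review", "ratios": ratios}
        else:
            results[q] = {"status": "auto", "choice": top[0],
                          "correct": top[0] == ANSWER_KEY.get(q)}
    return results
```

Because the answer sheets are plain copy paper, the hard part is reading the page reliably; a review queue like this is what keeps a person in the loop when the reading is uncertain.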

What does the future hold for Crowdmark and AI? 

“We’re keeping an eye on the market. There are companies using large language models to read student work, analyze it and provide feedback,” says Gilgen. “We’re opposed to building around solutions that de-center faculty input and 1:1 interaction.”

That stance reflects Crowdmark’s academic origins. Dr. James Colliander, the company’s co-founder, is a mathematics professor who came up with the idea to solve his own marking challenges by tackling the rote aspects of marking, not the human interaction.

“We do see opportunities to offer better-quality feedback with AI,” says Paul Mitchell, product designer. “We could use sentiment analysis to help graders see where they’re more critical in their marking and where they’re more lenient, either within an individual’s comment set or across a marking team.”

Without AI tools, that kind of tone analysis can only be done through spot checks by the lead instructor. “Given the time commitments our users already have, that kind of quality check is sporadic and labour-intensive. Making it easier to give marking teams feedback on their tone and bias, or to compare the work of graders within a marking team, could be a value-added tool for our community,” says Gilgen.
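As a rough illustration of the kind of tone comparison Mitchell and Gilgen describe, the sketch below scores each grader’s comments with an off-the-shelf sentiment model (VADER from NLTK, not anything in Crowdmark’s platform) and compares average tone across a hypothetical marking team.

```python
# Rough sketch: compare the average tone of each grader's comments using the
# off-the-shelf VADER sentiment model from NLTK. The graders and comments
# below are hypothetical; this is not Crowdmark's tooling.

from statistics import mean
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

comments_by_grader = {
    "TA 1": ["Nice setup, but the limit step is wrong.", "Careless arithmetic here."],
    "TA 2": ["Good attempt! Revisit the chain rule on part (b).", "Clear explanation."],
}

def tone_report(comments):
    """Average VADER compound score per grader (-1 very negative .. +1 very positive)."""
    sia = SentimentIntensityAnalyzer()
    return {
        grader: round(mean(sia.polarity_scores(c)["compound"] for c in texts), 3)
        for grader, texts in comments.items()
    }

print(tone_report(comments_by_grader))
```

A report like this wouldn’t assign marks; it would simply surface where one grader’s comments skew noticeably more critical than the rest of the team’s, so the lead instructor knows where to look.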

As Crowdmark considers new ways to integrate AI tools into its platform, educators can be assured that the team’s focus will be on protecting human interaction above all other priorities. “We’re unwavering in that stance,” says Caers.  

Looking to try out Crowdmark in your courses? Sign up for a free trial today.

About Crowdmark

Crowdmark is the world’s premier online grading and analytics platform, allowing educators to evaluate student assessments more effectively and securely than ever before. Educators report productivity gains of up to 75%, allowing them to provide students with prompt, formative feedback. This significantly enriches the learning and teaching experience for students and educators by transforming assessment into a dialogue for improvement.