“Robograders” are on the rise in high-stakes testing and in other aspects of our classrooms.
Keep in mind this is just an algorithm looking for patterns in student writing. An algorithm isn’t magic; it’s code, which is itself writing, written by people.
“The idea is bananas, as far as I’m concerned,” says Kelly Henderson, an English teacher at Newton South High School just outside Boston. “An art form, a form of expression being evaluated by an algorithm is patently ridiculous.”
The algorithm is looking at, and possibly learning from, these patterns. It’s “learning” if it’s driven by machine learning, a form of artificial intelligence.
“It’s so scary that it works,” Perelman sighs. “Machines are very brilliant for certain things and very stupid on other things. This is a case where the machines are very, very stupid.”
Also keep in mind that this technology will increasingly make its way into our classrooms. Used in conjunction with other assessment practices and habits, it can be a good way to sort through some initial data.
Perelman says his Babel Generator also proves how easy it is to game the system. Students are not going to walk into a standardized test with a Babel Generator in their back pocket, he says, but they will quickly learn they can fool the algorithm by using lots of big words, complex sentences, and certain key phrases that make some English teachers cringe. “For example, you will get a higher score just by [writing] ‘in conclusion,'” he says.
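To see why this kind of gaming works, here is a toy sketch of a surface-feature scorer. The features and weights are entirely hypothetical (real essay-scoring engines are proprietary), but the principle is the same: word length, sentence length, and stock transition phrases raise the score regardless of whether the essay says anything meaningful.

```python
import re

def toy_essay_score(essay: str) -> float:
    """A hypothetical pattern-based grader: rewards surface features only."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    if not words or not sentences:
        return 0.0

    avg_word_len = sum(len(w) for w in words) / len(words)   # "big words"
    avg_sent_len = len(words) / len(sentences)               # "complex sentences"
    transitions = sum(essay.lower().count(p)                 # key phrases
                      for p in ("in conclusion", "moreover", "furthermore"))

    # Arbitrary weights: every feature is about form, none about meaning.
    return 2.0 * avg_word_len + 0.5 * avg_sent_len + 3.0 * transitions

plain = "Dogs are good. They help people."
padded = ("Multitudinous canines demonstrate extraordinarily beneficial "
          "characteristics, and, moreover, in conclusion, they facilitate "
          "humanity's collective flourishing.")

print(toy_essay_score(padded) > toy_essay_score(plain))  # True
```

The padded nonsense outscores the plain, sensible sentences every time, which is exactly the weakness Perelman’s generator exploits.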
My big takeaway from this is less about the use of “robograders” and more about poor assessment practices. If the only assessment or data point is a single essay score, we’ve got bigger issues.