Laura Gibbs deserves some kind of prize for public service. I’ve been tweeting her series of posts about peer grading in Coursera for a while now, but since Audrey Watters has written them up I figured I might as well consider them here too. What you need to know going in is that Laura teaches online for the University of Oklahoma so she’s clearly rooting for Coursera as she takes their course on Science Fiction and Fantasy. I think this makes her indictments of the process all the more damning.
For example, there’s this:
So, what kind of data is Coursera collecting about the efficacy of this process? None. What kind of feedback are people getting on their feedback? None. What kind of guidelines and tips did we get on offering feedback? (Almost) none. Given that this is a skill, and a skill that many people have not had to use in the past, I think we would need a LOT of tips and guidelines to help with that, along with feedback so that people who are just now developing this skill can estimate how well they are doing.
By far the biggest problem, though, is vague and/or inaccurate feedback… and that’s a much harder problem to solve. It’s much like the problem with the poor quality of the essays overall; yes, there are inappropriate essays (blank essays, essays only a few words long, plagiarized essays, even spam essays) that need to be flagged – but the larger problem is the bewildering number of essays that are of such poor quality that it gets very discouraging to spend time on them. Without some kind of additional instructional component to the class, I am just not convinced that this often unreliable and/or unhelpful anonymous peer feedback can really help people to improve their writing.
“Can peer feedback really work in a setting where there is so little community and where there is little sense of reciprocity?,” asks Audrey. Well, that depends upon how you define the term “work.”
If you watched that Daphne Koller TED video, you probably remember the joke about how she tried to convince those terrible humanities professors that multiple choice was a perfectly acceptable way to test for higher order critical thinking and they didn’t buy it. Ha ha ha. Unable to do that, they went with Plan B: peer grading. The impression this story left on me was that Coursera was only interested in doing the absolute minimum in order to make their humanities classes acceptable. Certainly, everything Laura has written suggests that they didn’t put much forethought into some pretty basic problems.
But I want to take this point one step further. I would argue that creating an effective peer review process for grading writing is impossible – like chopping down the mightiest tree in the forest with a herring. Since writing is a skill that you never really stop learning, peer grading is therefore almost always the blind leading the blind.
For example, I am about to go into deep seclusion to polish my book manuscript for the last time before it hits the copy editor. It needs polishing because I have a bad habit of using the passive voice the first time I write anything at all complicated. Usually I turn those sentences around when I catch them during proofing, but I don’t always catch them. If your peers don’t know what passive voice is, or (as seems very likely in a lot of these Coursera classes) your peers don’t even speak English as their first language, learning how to write well solely from them is going to be impossible.
Since I teach history, I am prone to think of learning history as an excellent end in itself. However, if you want to be employable when college is over, learning how to write well is the most valuable skill that academic history classes can offer you. No wonder, then, that employers don’t take job applicants with online college degrees seriously.
It appears that Coursera is giving them little reason to think otherwise.