“The machine process pervades the modern life and dominates it in a mechanical sense. Its dominance is seen in the enforcement of precise measurements and adjustment and the reduction of all manner of things, purposes and acts, necessities, conveniences and amenities of life, to standard units.”
I’ve been devoting this entire blog to the subject of edtech for many weeks now, and I can’t believe I haven’t touched upon the subject of assessment yet. Like most of the academics I know, I find the notion of assessment somewhat insulting. It’s not as if I’m uninterested in knowing whether I’m teaching my students what I want them to know. It’s that most of the assessment methods devised to figure that out aren’t geared towards the kinds of skills I want them to learn.
Facts go in one ear and out the other. Skills, on the other hand, are supposed to last a lifetime. The problem is that assessing whether students have those skills is inherently subjective: not arbitrary, just subjective. The vast majority of the tools I’ve seen designed to measure the effectiveness of that process simply don’t take these subjective goals seriously. Indeed, they reflect the discipline of the machine, valuing qualities like precision and convenience over the kinds of questions with no easy answers that we historians cherish.
This kind of pressure has got to be about a hundred times worse when the course in question is being taught entirely online. I can just hear the administrators now, “All the other online instructors assign multiple choice tests. What makes you history teachers so special?” Here’s MfD, writing in the comments of an edtech blog that I wouldn’t have been reading if she hadn’t tweeted it:
Edtech needs also vary really significantly between, say, Physics and History, but instead we’ve seen the one-size-fits-all design strategy. As a result one of the most difficult elements in any RFP is trying to offset gains for one against losses for the other; and I think part of the reason that there’s a rising tide of edtech disaffection from humanists is that the powertools of assessment driven analytics work best for disciplines that have strong engagement with, for example, quiz based assessment. Handily these are also disciplines that can be most comfortable with textbooks, in whatever form.
The situation is so much more complicated for people whose bread is buttered in the non-STEM disciplines, especially given that their efforts to deploy the Socratic method (as we’ve just seen in Utah) can be really muddled if students are being trained by their institutions to expect content-driven testing as the reward for lecture attendance.
Square peg, meet round hole. I’m not saying that the makers of various learning management systems couldn’t create systems that would allow me to teach history the way I want to teach it, but they have absolutely no financial incentive to provide me with those tools and every financial incentive in the world to stick with one-size-fits-all.
If you follow this stuff as closely as I’ve been doing, you can find plenty of instances where the edtech companies make the right noises about creating professor-centered platforms, but if you read the entire post that MfD is commenting on above, you’ll see that this particular expert is predicting a choice between two or three platforms for everyone. That’s not choice. That’s Coke or Pepsi.
What if I don’t like cola?
My fear is that the need to assess learning will become the club used to force professors into a learning management system that doesn’t reflect the concerns of their discipline, particularly history or English. When the rubber meets the road, the companies that sell education technology will follow the money and side with the administrations that prefer the discipline of the machine to the freedom to teach however we like, which those of us who teach face-to-face still enjoy…unless we teach in Utah.