When last we checked in on Sebastian Thrun (a.k.a. this morning), he was misreading the nature of the problem with the Udacity/San Jose State math MOOC experiment. Now, he’s suggesting a solution to the many problems with peer grading written MOOC essays:
When someone writes an essay, you want to give meaningful feedback so they can improve. I’ve seen good progress on the assessment of essays; I’ve seen almost no progress on qualified feedback. And that’s where you have a very simple opinion—you just have people do it. Our classes right now require essay writing, and those essays are being graded by people and it’s just fine, in my opinion. Why not? There are a lot of unemployed people in this country. I don’t think it has to be all computerized.
So here’s Thrun’s MOOC future in a nutshell: 1) a taped superprofessor provides the content, 2) an impoverished Ph.D. does the grading, and 3) a computer does the actual teaching. On 3), here’s Thrun again (from the same link as above):
We do some of it manually right now, so we analyze student profiles, we make predictions of what are the success rates, and then we intervene manually right now based on the predictions we get from students’ profiles. But we haven’t automated this yet. So eventually it’s going to be a big piece of artificial intelligence that sits there, watches you learn, and helps you pick the right learning venue or task, so you’re more effective and have more pleasure.
Not a professor in sight, not even a glorified teaching assistant offering online supervision. After all, Thrun needs as many unemployed Ph.D.s as possible in order to force down the rates that Udacity will be paying its graders.