Yeah, I’m going to write about Sebastian Thrun’s pivot again. Why? Because every time I look at that story, I find something else worth writing about in it. In fact, with additional perspective, I’ve come to believe the most important part of that whole article is his use of the word “profound” here:
“I’d aspired to give people a profound education–to teach them something substantial,” Professor Sebastian Thrun tells me when I visit his company, Udacity, in its Mountain View, California, headquarters this past October. “But the data was at odds with this idea.”…
“We were on the front pages of newspapers and magazines, and at the same time, I was realizing, we don’t educate people as others wished, or as I wished. We have a lousy product,” Thrun tells me.
The question then becomes: what made Udacity’s courses a “lousy product”? Why couldn’t those courses be “profound”? I’d argue that it’s the lack of the human element. Appointing untrained “mentors” couldn’t get SJSU kids through introductory math. Likewise, turning math into a solo game or giving kids iPads on school trips won’t make a difference either, because computers can’t give anyone a profound education. Only people can. Thrun wanted to create teaching machines, but it turns out his machines can’t do what they’re supposed to do for the people who need a profound educational experience the most.
To be fair, this problem goes well beyond MOOCs. Education is an inherently labor-intensive process, which explains why everyone who’s trying to automate it will eventually have to come to the same conclusion that Thrun did. For the sake of variety, consider another subject that I’ve been meaning to get back to for a long time now: automated essay grading.
A while back, Elijah Mayfield of LightSide Labs made a couple of really interesting appearances over at the tech blog e-Literate. LightSide Labs is working on using computers to assess essays. Notice my change of word there? They specifically state that their goal is to help teachers deal with student writing in large volumes, not to do that job for them. Elijah was also incredibly clear that he wants to do this to make assigning writing more rather than less feasible. In short, these people are a lot more teacher-friendly than Sebastian Thrun is.
To make good on these noble intentions, LightSide Labs needs actual teacher-graded essays to train the computer to recognize patterns. Do away with teachers and the whole program falls apart. Yet even with teachers playing a huge role in the machine-grading process, there are gigantic holes in what their program can grade well:
Longer reports, like a 10-page term paper, are probably not a good fit for automated assessment. When you start adding section headings, writing becomes less about good style and content, and more about the organization of a document and the flow of information. These aren’t a perfect fit for LightSide’s strengths.
LightSide isn’t going to check grammar. In fact, we don’t have any modules built in that test students’ pluralization, subject-verb agreement, or other textbook rules. If teachers give poor scores to writing that breaks particular rules, then LightSide will learn to do the same. Fundamentally, though, we don’t believe it’s our place to choose what rubric to grade on and what rules to prescribe. That should be up to the teacher; our intelligent software will learn from teacher grades.
Finally, remember that LightSide doesn’t fact-check. Our software learns to spot vocabulary and content that looks like high-quality answers. Sometimes, students will write coherent, well-formed, and on-topic essays that make inaccurate claims about the source material. Usually, our algorithms won’t be intelligent enough to spot well-written but untrue claims. It will, however, grade their writing quality, which is what we aim for.
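To see why those holes are structural rather than incidental, here is a minimal sketch of the kind of supervised pattern-matching Mayfield describes: teacher-assigned grades as the only training signal, surface vocabulary as the only features. The essays, grades, and nearest-neighbor scoring rule below are my own toy simplification, not LightSide’s actual model.

```python
import math
import re
from collections import Counter

def bag_of_words(text):
    """Surface features only: word counts. No grammar rules, no fact-checking."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def predict_grade(essay, graded_examples):
    """Return the teacher's grade for the most similar training essay.

    The program never judges truth or quality itself; it only echoes
    back the grades a human already assigned to similar-looking text.
    """
    features = bag_of_words(essay)
    best = max(graded_examples, key=lambda ex: cosine(features, bag_of_words(ex[0])))
    return best[1]

# The teacher-graded essays ARE the program. Remove them and nothing works.
training = [
    ("Trade restrictions and the impressment of sailors pushed the United States toward war.", 5),
    ("war happened and stuff occurred and then it ended eventually", 1),
]

# A fluent but factually backwards essay still matches the high-scoring pattern.
print(predict_grade("Impressment of sailors and trade restrictions ended the war instantly.", training))
```

Everything the sketch can do flows from the human grades at the bottom; everything it can’t do, like spotting the factual error in that last essay, is exactly the gap Mayfield concedes above.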
What’s left isn’t exactly a profound educational experience, is it? I’d argue that it’s pretty much rote learning in order to please a computer, rather than the living, breathing person who assigns you a final grade. More importantly, the inability to fact-check pretty much disqualifies their system from use in a history class right there. To be fair, I don’t think the LightSide Labs program is aimed at my discipline, but imagine a computer grading program that could be. Imagine that someone had created a program that can tap into all the published sources on the Internet so that it can tell that the War of 1812 actually began in 1812. Is that going to make a profound educational experience possible?
Of course not. History is no more about learning facts than baseball is about learning to hit the ball. History is full of countless subtleties and intricacies that defy immediate understanding. More importantly, there is a human element to both history and baseball that defies easy description. If college, as Thrun suggests, is really all about obtaining eventual employment, then grading by computer is a huge step backwards because no computer will ever be the boss of you. When your boss is a human being, they bring all the foibles that human beings bring to any position of power. Learning to follow the wishes of your professors, even the arbitrary ones, may be the best on-the-job training that you’ll ever have.
If I’m wrong and the computer will end up being your boss, then I’m afraid we’re all screwed already. Not only will all the cost savings associated with automation flow to the owners of capital, but the complicated services that computers provide will also become less effective almost by definition. Here’s Nick Carr writing in The Atlantic:
Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world. That has always been true, but in recent years, as the locus of labor-saving technology has shifted from machinery to software, automation has become ever more pervasive, even as its workings have become more hidden from us. Seeking convenience, speed, and efficiency, we rush to off-load work to computers without reflecting on what we might be sacrificing as a result.
That whole article is well worth the read in order to understand the implications of automation in many areas of modern life. It’s not really a HAL 9000 argument at all. Carr’s point seems to be that if we rely too much on automation to get essential jobs done, we’ll forget how to do things, or, because these actions become so systematic, we’ll only do them badly. While the immediate effects of an automated education may not be plane crashes, to me there would still be an inevitable, obvious drop-off in quality.
Last year, I proposed a Turing Test for judging the effectiveness of online education. If a student can’t tell whether they’re being taught by a computer or a person, then the computer is doing a teacher’s job as effectively as a teacher can. But I still don’t think we can ever reach that point.
Machines can teach you rules, but they can never teach you how to break them, or especially when breaking the rules is the appropriate response to a particular situation. No wonder Sebastian Thrun wants to do corporate training now. The people most willing to pay for his services are perhaps the only people in society who want education to produce yes-men who will never color outside the lines. Call that what you want, but it certainly isn’t a profound educational experience. No matter how powerful our future robot overlords eventually become, only other well-trained human beings can provide that.