An automated education is a contradiction in terms.

18 11 2013

Yeah, I’m going to write about Sebastian Thrun’s pivot again. Why? Because every time I look at that story, I find something else worth writing about in it. In fact, with additional perspective, I’ve come to believe the most important part of that whole article is his use of the word “profound” here:

“I’d aspired to give people a profound education–to teach them something substantial,” Professor Sebastian Thrun tells me when I visit his company, Udacity, in its Mountain View, California, headquarters this past October. “But the data was at odds with this idea.”…

“We were on the front pages of newspapers and magazines, and at the same time, I was realizing, we don’t educate people as others wished, or as I wished. We have a lousy product,” Thrun tells me.

The question then becomes: what made Udacity’s courses a “lousy product”? Why couldn’t their courses be “profound”? I’d argue that it’s the lack of the human element. Appointing untrained “mentors” couldn’t get SJSU kids through introductory math. Likewise, turning math into a solo game or giving kids iPads on school trips won’t make a difference either, because computers can’t give anyone a profound education. Only people can. Thrun wanted to create teaching machines, but it turns out his machines can’t do what they’re supposed to do with respect to the people who need a profound educational experience the most.

To be fair, this problem goes well beyond MOOCs. Education is an inherently labor-intensive process, which explains why everyone who’s trying to automate it will eventually have to come to the same conclusion that Thrun did. For the sake of variety, consider another subject that I’ve been meaning to get back to for a long time now: automated essay grading.

A while back, Elijah Mayfield of LightSide Labs made a couple of really interesting appearances over at the tech blog e-Literate. LightSide Labs is working on using computers to assess essays. Notice my change of word there? They specifically state that their goal is to help teachers deal with student writing in large volumes, not to do that job for them. Elijah was also incredibly clear that he wants to do this to make assigning writing more rather than less feasible. In short, these people are a lot more teacher-friendly than Sebastian Thrun is.

To make these noble intentions feasible, LightSide Labs needs actual teacher-graded essays in order to train the computer to recognize patterns. Do away with teachers and the whole program falls apart. Yet even with teachers playing a huge role in the machine-grading process, there are gigantic holes in what their program can grade well:

Longer reports, like a 10-page term paper, are probably not a good fit for automated assessment. When you start adding section headings, writing becomes less about good style and content, and more about the organization of a document and the flow of information. These aren’t a perfect fit for LightSide’s strengths.

LightSide isn’t going to check grammar. In fact, we don’t have any modules built in that test students’ pluralization, subject-verb agreement, or other textbook rules. If teachers give poor scores to writing that breaks particular rules, then LightSide will learn to do the same. Fundamentally, though, we don’t believe it’s our place to choose what rubric to grade on and what rules to prescribe. That should be up to the teacher; our intelligent software will learn from teacher grades.

Finally, remember that LightSide doesn’t fact-check. Our software learns to spot vocabulary and content that looks like high-quality answers. Sometimes, students will write coherent, well-formed, and on-topic essays that make inaccurate claims about the source material. Usually, our algorithms won’t be intelligent enough to spot well-written but untrue claims. It will, however, grade their writing quality, which is what we aim for.
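For the technically curious, the supervised setup LightSide describes can be sketched in a few lines. This is only a toy illustration of the general technique, not LightSide’s actual system; the essays, scores, and model choices here are all invented. The point is that the model learns nothing except how to imitate the teacher’s grades:

```python
# Toy sketch of machine essay scoring: learn to imitate teacher grades.
# This is NOT LightSide's code; the essays and scores are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical teacher-graded training essays (scores out of 6).
essays = [
    "The War of 1812 reshaped American politics in lasting ways.",
    "war happened and then stuff changed the end",
    "Madison's administration faced fierce sectional opposition to the war.",
    "i think the war was bad because of reasons",
]
teacher_scores = [5.5, 1.0, 6.0, 2.0]

# Word-frequency features plus regression: the model learns to reproduce
# the teacher's scoring patterns. Without teacher-graded essays to train
# on, there is nothing for it to learn.
model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
model.fit(essays, teacher_scores)

# A fluent but factually wrong essay is scored on word patterns alone;
# no step here checks whether any claim is true.
predicted = model.predict(
    ["Madison's administration skillfully avoided all sectional opposition."]
)
print(predicted[0])
```

Notice that nothing in this sketch can tell whether the War of 1812 began in 1812; it only learns which word patterns the teacher rewarded.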

What’s left isn’t exactly a profound educational experience, is it? I’d argue that it’s pretty much rote learning in order to please a computer, rather than the living, breathing person who assigns you a final grade. More importantly, the inability to fact-check pretty much disqualifies their system from use in a history class right there. To be fair, I don’t think the LightSide Labs program is aimed at my discipline, but imagine a computer grading program that could be. Imagine that someone had created a program that can tap into all the published sources on the Internet so that it can tell that the War of 1812 actually began in 1812. Is that going to make a profound educational experience possible?

Of course not. History is no more about learning facts than baseball is about learning to hit the ball. History is full of countless subtleties and intricacies that defy immediate understanding. More importantly, there is a human element to both history and baseball that defies easy description. If college, as Thrun suggests, is really all about obtaining eventual employment, then grading by computer is a huge step backwards because no computer will ever be the boss of you. When your boss is a human being, they bring all the foibles that human beings bring to any position of power. Learning to follow the wishes of your professors, even the arbitrary ones, may be the best on-the-job training that you’ll ever have.

If I’m wrong and the computer will end up being your boss, then I’m afraid we’re all screwed already. Not only will all the cost savings associated with automation flow to the owners of capital, but the quality of the complicated services that computers provide will also decline almost by definition. Here’s Nick Carr writing in the Atlantic:

Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world. That has always been true, but in recent years, as the locus of labor-saving technology has shifted from machinery to software, automation has become ever more pervasive, even as its workings have become more hidden from us. Seeking convenience, speed, and efficiency, we rush to off-load work to computers without reflecting on what we might be sacrificing as a result.

That whole article is well worth the read in order to understand the implications of automation in many areas of modern life. It’s not really a HAL 9000 argument at all. Carr’s point seems to be that if we rely too much on automation to get essential jobs done, we’ll forget how to do things, or, because these actions will become so systematic, only do them badly. While the immediate effects of an automated education may not be plane crashes, to me there would still be an inevitable, obvious drop-off in quality.

Last year, I proposed a Turing Test for judging the effectiveness of online education. If a student can’t tell whether they’re being taught by a computer or a person, then the computer is doing a teacher’s job as effectively as a teacher can. But I still don’t think we can ever reach that point.

Machines can teach you rules, but they can never teach you how to break them or, especially, when breaking the rules is the appropriate response to a particular situation. No wonder Sebastian Thrun wants to do corporate training now. The people most willing to pay for his services are perhaps the only people in society who want education to produce yes-men who will never color outside the lines. Call that what you want, but it certainly isn’t a profound educational experience. No matter how powerful our future robot overlords eventually become, only other well-trained human beings can provide that.




12 responses

18 11 2013
Laura Gibbs

THANK YOU for this. Exactly what I wanted to say… but I haven’t got the patience or stamina to say it, ha ha. This metaphor of the Turing Test is excellent; I think I had missed that before. :-)

18 11 2013
Audrey Watters (@audreywatters)

Ah, the subject of my forthcoming book: the automation of education. As a historian, I’m sure you know that this has been a long-running effort — perhaps the history of 20th century education technology.

Also, have you read “The Most Human Human”? It’s about the Loebner Prize, given annually to computers/humans that can fool judges in the Turing Test. It raises interesting questions about how humans are becoming increasingly mechanized in our communications. In other words, it isn’t simply that a student would try to judge whether or not they’re interacting with a human or a computer; it’s that both educators and students are being prompted to be more like machines (standardized, efficient, etc.).

18 11 2013
Jonathan Rees


Way back when I was younger and way less cynical than I am now, I wrote this:

It’s about the Taylorization of education through standardized testing: breaking learning down into discrete bites so that administrators can tell teachers how to teach. What I never imagined when I wrote that is that the teacher’s job would ever actually be physically divided by technology into content provision and actual instruction. It’s not as if I thought that the technology couldn’t do this. Taped lectures are indeed a very old thing. What I never imagined is that anybody would ever think that breaking teaching into parts without putting it back together again could be a successful strategy.

Solutionism certainly is one powerful idea.

19 11 2013
The King of MOOCs Abdicates the Throne (The Slate) | Sunoikisis

[…] same room with them—and, accordingly, Chafkin’s unabashed display of sycophantic longing has blazed up the […]

19 11 2013
“I know a dead parrot when I see one and I’m looking at one right now.” | More or Less Bunk

[…] Yes, I know that the MOOC hype continues unabated. And yes, I know that Thrun insisted this morning that his academic MOOCs are in fact only resting. Nevertheless, all of us living, breathing educators who actually know all of our students’ names understood that xMOOCs were a stupid idea from the moment we first heard about them because students who need higher education the most simply cannot teach themselves. […]

20 11 2013
Udacity Pivot | LectureMonkey

[…] latest pivot. A lot of ink has been spilled analyzing the article and its content (e.g. here, here, here and here). Pundits aside, one of the most interesting assertions in the article is […]

20 11 2013
MATH ATTACK! - Udacity Leaves University Environment

[…] the same room with them—and, accordingly, Chafkin’s unabashed display of sycophantic longing has blazed up […]

20 11 2013
The King of MOOCs Abdicates the Throne | eTraining Pedia

[…] same room with them—and, accordingly, Chafkin’s unabashed display of sycophantic longing has blazed up […]

26 11 2013

The one thing I want to think about a bit, though, is the difference between automated and self-managed. There are situations in which the learner persists and develops skills that are then measured by the capacity to do things, and this produces sufficient intrinsic reward to keep going.

I’m thinking here about all of the basic assumptions behind computer game playing. That’s the thing we all forget about MOOCs: what’s in their recent history, conceptually, isn’t education but massive game-playing. I watch my kids with Minecraft, and I’m in awe of a) how quickly they figure out complex skills within the environment and b) how readily they transfer these skills to other similar environments. I can see the temptation to think about these environments as educational.

So then the tension is around the question of the profundity that comes from connecting to another human and her or his thinking, whether through reading or being in person together. What is it that we learn from being taught? Lovely UK blogger Plashing Vole has a terrific post up about this right at the moment.

No disagreement at all on the pivot. It’s exactly what it looks like. If you can’t monetize or monopolise higher education because you completely underestimated the cultural, political and social complexity of 95% of it, try a different market. Issue press release saying this is what you meant to do all along.

26 11 2013
Jonathan Rees

As usual, you’re way ahead of me. The reason I don’t dismiss all MOOCs out of hand is that the cMOOC could be cool for people who are self-motivated and already know how to learn. But that’s not where the money is. As a result, commercial MOOC providers try (and fail) to bang a square peg (xMOOCs) into a round hole (large intro classes) and everybody loses.

24 01 2014
The “outer limits” of online education. | More or Less Bunk

[…] out. Yes, computers can grade for grammar and for sentence structure, but (as I’ve explained before) they can’t do anything with respect to ideas let alone the moral judgments that come with […]
