A Turing Test for online education.

11 10 2012

I picked up the Radiolab habit this summer when I was in South Korea. Since all four TV channels I could get in my room broadcast entirely in Korean, I had lots of podcast time. Radiolab made sense because I’m getting more science-y as I grow older. [Do scientists get more history-ish as they age?]

Yesterday morning at the gym, I was listening to this very good Radiolab short on the mathematician Alan Turing. Now, I’d heard of Turing before and I’d heard of the Turing Test before, but it just so happened that I was thinking about the next post I’d write for this blog at about the same time. As a result, this was the first time I had ever put Turing and online education together.

A Turing Test, to give a quick definition for the uninitiated, is something Turing proposed as a way to determine whether machines can really think. Basically, he said that if you put a person at a keyboard on one side of a curtain and let them hold a typed conversation with whatever is on the other side, and they can’t tell whether they’re communicating with a machine or with another person, then that machine is really thinking.

Leaving synchronous instruction out of this, the distances inherent in online education do act as something of a curtain. So I propose a parallel test: if a student can’t tell whether they’re being taught online by a person or by a computer program, then that program really is teaching, and computers can guide people into higher-order learning.

Unlike in Turing’s test, however, I think there’s some evidence to suggest that this kind of instruction is happening now. Let me make an analogy from the world of chess. Recently, I taught my 8-year-old to play. He beats me more often than every once in a while because I’m not particularly good at chess. I make far too many dumb mistakes. Nevertheless, I’m proud because I taught him everything he knows (which isn’t all that much). For example, I told him to take control of the middle of the board and see what develops. I told him to always castle in order to protect his king. I explained that a rook is more valuable than a bishop or a horse and that he should value the bishop and the horse about equally. Watching me play has reinforced those strategies.

My son probably would have learned more by playing a computer. My worst failing at chess is that I can’t think more than one move in advance, since doing so gives me a headache. Computers can calculate many moves ahead, so they likely would have served as a better example for strategy.

If you do play computer chess, you know the computer isn’t really thinking, because you probably understand roughly how the programming works. But imagine a situation in which someone’s moves were being secretly signalled to them by a computer and you didn’t know it. You’d think they were a genius, because the computer can think more effectively about chess than you can.
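If you’ve never looked under the hood, a rough sketch may help show why the computer isn’t really thinking: a bare-bones chess program just searches a few moves ahead and counts up material, with no understanding of the game at all. This is only an illustration, not how any real engine works; it assumes the third-party python-chess library for the rules, and the piece values and three-ply search depth are placeholders I’ve chosen for the example.

```python
# A bare-bones look-ahead chess player: count material, search a fixed number
# of half-moves ahead, and pick whichever legal move scores best. Assumes the
# third-party python-chess library (pip install chess); the values and depth
# here are illustrative only.
import chess

# The rough values the post alludes to: rook above bishop and knight,
# bishop and knight worth about the same.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board: chess.Board) -> int:
    """Score a position as White's material minus Black's."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def minimax(board: chess.Board, depth: int) -> int:
    """Look `depth` half-moves ahead, assuming both sides pick their best reply."""
    if depth == 0 or board.is_game_over():
        return material(board)
    scores = []
    for move in list(board.legal_moves):
        board.push(move)
        scores.append(minimax(board, depth - 1))
        board.pop()
    return max(scores) if board.turn == chess.WHITE else min(scores)

def best_move(board: chess.Board, depth: int = 3) -> chess.Move:
    """Pick the legal move with the best look-ahead score for the side to move."""
    side_is_white = board.turn == chess.WHITE
    best, best_score = None, None
    for move in list(board.legal_moves):
        board.push(move)
        score = minimax(board, depth - 1)
        board.pop()
        if best_score is None or (score > best_score if side_is_white
                                  else score < best_score):
            best, best_score = move, score
    return best

print(best_move(chess.Board()))  # suggests an opening move for White
```

Even this toy program looks further ahead than my one move, which is the whole point of the analogy.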

Now imagine that the men on the chessboard get up and tell you where to go. In other words, imagine a situation where all the rules are broken. That’s higher-order thinking in the humanities. Can a machine ever offer that kind of instruction without a real person responding directly to students in something close to real time? Can a machine teach students to make their own rules?

Perhaps we shouldn’t be asking how effective online education is at teaching, but rather what kinds of things we actually want online students to learn.

