The worst of the best of the best.

8 07 2014

During the early 1870s, the American industrialist Andrew Carnegie championed something called the Bessemer process, a new way of making steel that was only about fifteen years old at that time. As a result, he could make more steel at a cheaper price than any of his competitors. He then took the profits from his initial success and plowed them back into the business, investing in other cutting-edge technologies and buying out the rivals who didn’t fall out of the market on their own. By the early 1890s, he was the owner of the largest steel company in the world. I’d argue that the principle that made this story possible is the first-mover advantage.

That’s not a statement about making steel in general, which of course had been going on for centuries before Carnegie came along, but about the Bessemer process specifically. Building steel plants was expensive, but Carnegie (who actually made his initial money working for the Pennsylvania Railroad) had the money to invest in pricey new technologies while his competitors didn’t. His steel was cheaper, more abundant and actually of higher quality than the other steel available on the market at that time. No wonder he got so rich.

Mark Zuckerberg’s wealth is much harder to explain. I know there were other social networks before Facebook, but I’m not exactly sure what made Facebook the one network that people absolutely had to join. Certainly, at some point it reached a critical mass of users, so that newcomers came to believe they’d be missing out if they weren’t on it. Unfortunately, to me the ultimate problem with Facebook is that it actually impedes meaningful interactions with your friends rather than helping them, by doing crazy stuff like conducting experiments upon you without your knowledge. That’s why I collected the e-mail addresses I didn’t already have for all my friends and got out. Facebook has shot whatever first-mover advantage it had to Hell.

As anybody who’s studied MOOCs in the slightest can tell you, the Stanford Computer Science department did not invent them – those nice Canadians did. However, it was those Stanford people who first decided to treat MOOCs as a market rather than as an educational opportunity. I can’t remember which came first, Coursera or Udacity, but there’s no question that Coursera at the very least has gone to great lengths to take advantage of its opportunity to be an early mover, putting MOOCs up regardless of quality – essentially beta testing them in front of tens of thousands of people – because it cares more about the quantity of students than about the quality of the educational experience it provides.

Think I’m being unfair? Sebastian “lousy product” Thrun essentially admitted this to Fast Company last year. With respect to Coursera, do you remember the Google Document that brought down that online learning MOOC? The superprofessor who quit his MOOC in the middle of the course? How about the “no right answers” guy? These were not, as the MOOC Messiah Squad always likes to put it, “the best of the best.” The people running these MOOCs were the worst of the best of the best, which actually turns out to be pretty darn bad in some cases. Well, I hate to judge a situation solely through its news coverage, but it looks like we have a new entrant in this particular Hall of Shame. From the Chronicle:

A massive open online course on making sense of massive open online courses caused massive confusion when the course content was suddenly deleted and the professor started writing cryptic things on Twitter. The MOOC, called “Teaching Goes Massive: New Skills Required,” was taught by Paul-Olivier Dehaye, a lecturer at the University of Zurich. Offered through Coursera, the course had been conceived of as a meta-MOOC designed to help disoriented educators find their feet in the online landscape. The course “grew out of the author’s experiences as an early adopter and advocate of newer technologies (such as Coursera) for online teaching,” according to a description on Coursera’s website. So far, the course has produced chaos rather than clarity. All the videos, forums, and other course materials mysteriously vanished from the website last week. As students in the course grappled with the bizarre turn of events, Mr. Dehaye offered only vague, inscrutable tweets.

And here’s some of the IHE coverage, which begins to explain the reasons for this weirdness:

“[Dehaye] appears to be conducting a social media/MOOC experiment in the most unethical manner,” the student said in a post that is currently the most viewed on the forum. “In my opinion his behavior is discrediting the University of Zurich, as well as Coursera.”

As the mystery captivated the post-holiday weekend crowd on Twitter, more details about the potential experiment were unearthed. Kate Bowles, a senior lecturer at the University of Wollongong, found Dehaye’s name attached to a 2003 paper in which the authors calculated how 100 people could escape from imprisonment when their only means of communication was a lightbulb.

Bowles also found what appeared to be Dehaye’s contributions to the community blog MetaFilter.

“I would be interested to hear people’s opinions on the idea of using voluntariat work in MOOCs to further research (in mathematics, particularly),” Dehaye wrote in one post. “Would this be exploitative? What would be good reward systems? Fame, scientific paper, internship?” He later shared his plans to teach the MOOC, and in response to a thread about the Facebook experiment, wrote “it is hard to pass the message on [C]oursera that emotions are important in teaching but that expressing those emotions can lead to data collection.”

Picking on the superprofessor here seems very, very easy, so I’d rather wonder what responsibility Coursera has for this disaster. I’m sure they’d tell you that picking instructors is the job of their partner universities, but Coursera still has to approve the courses. What standards do they use to decide who really is the best of the best? Judging from the failures I’ve listed in this post, not very rigorous ones.

Coursera, in its quest to attract eyeballs, has forgotten that education is not like steel – or at least not like steel rail.* Quality matters. Frankly, I’d take any adjunct professor with ten years’ experience and put them in front of 50,000 people before I’d do so with any star in their field. After all, adjuncts devote practically all their professional time to providing a better educational experience; in fact, their continued employment often depends upon it. Many (but certainly not all) professors at elite universities are too busy doing their own research to care about what’s happening in their own classes. Commercial MOOCs are simply the logical extension of that kind of negligence.

Professors who really care about the quality of education don’t give a damn about the first mover advantage. They’d rather do their jobs well than become famous or conduct massive social experiments on their students without their consent, which probably means that they’d never in a million years work for Coursera.

* The quality of steel actually did matter for later steel products like structural steel for building skyscrapers and armor plate for battleships. I’m just talking about the 1870s here.

Update: I wrote this post so early this morning that I forgot two really important entries in the Coursera Hall of Shame: 1) what I like to think of as the MOOC Forum Circus incident and 2) the truly terrible UC-Santa Cruz MOOC that Jon Wiener described in this article. And just in case you don’t read Wired Campus (and you should), here’s Steve Kolowich’s truly weird update on the story behind the truly terrible MOOC at hand.





“What goes up, must come down.”

20 06 2014

My knowledge of the contemporary steel industry is a little rusty, but it’s better than you might think. About a decade ago, my poorly-read dissertation was enough to get me invited on as a consultant to two NEH seminars conducted at the Western Reserve Historical Society in Cleveland, OH. Those gigs included tours of the huge Mittal plant there, where we discussed the difference between how steel is made now and how it was made back in the day.

The big difference is that hardly anybody in America makes new steel from raw materials anymore. Scrap steel is so cheap (and has been for decades now) that just about every blast furnace in America* has been taken down and replaced with electric arc furnaces that melt old steel so that it can be recycled into new products. Just last week, I got a chance to tour the arc furnace building here at Evraz in Pueblo for the first time. They have “recipes” which they use to turn various kinds of scrap into first-rate finished products. I don’t think anybody bothers to call plants like this one “minimills” anymore. In reality, they are old plants (the one in Pueblo dates from 1906) that have been retrofitted for new market realities.

All this serves as background for my reading of the primal scream of an interview that Clayton Christensen just gave Business Week in response to the Jill Lepore takedown of disruptive innovation in the New Yorker. Here’s the part about the steel industry:

[Disruptive innovation] is not a theory about survivability. I’d ask [Lepore] to go see an integrated steel company operated by U.S. Steel. Seriously. And come back with data on, does U.S. Steel make rebar anymore? No, they’ve been taken out of rebar. Do the integrated steel companies like U.S. Steel make rail for the railroads? No. Do they make rod and angle iron, Jill? No. Do they make structural steel I-beams and H-beams that you use to make the massive skyscrapers downtown, does U.S. Steel make those beams? Come on, Jill, tell me! No!

So what do they make? Steel sheet at the high end of the market. The fact is that they make steel sheet at the high end of the market, but have been driven out everywhere else. This is a process, not an event.

For all I know, Christensen may be right about what U.S. Steel makes (although there must be a hell of a market for high-end sheet steel if they can dominate the entire steel industry from that niche). I can, however, tell you this: Evraz makes rail, and apparently they’re the market leader. And they do this from a plant that looks like Hell from the outside, but is obviously incredibly high-tech from within. In other words, this particular facility (under different owners) has held its own in its battle against the slash-and-burn “innovators” of the steel industry. And they’re still a union shop!!!

But you don’t even have to innovate in order to keep up. I heard a story in Cleveland that still haunts my dreams. Apparently, there was once a rolling mill there that the owners sold to a Chinese firm. That Chinese firm took that mill, disassembled it piece by piece, reassembled it in China and started making steel there again. They couldn’t make money from it in America because of labor costs and environmental regulations, but the old technology still worked fine in China. What’s standing at that site now? A Walmart, of course.

Unlike Lepore, who did the research across industries and got it fact-checked, I’m not saying that Christensen is full of it. What I am saying is that the situation in the steel industry is clearly much more complicated than he has suggested. Unfortunately, complicated stories don’t sell books to businessmen. Complicated stories don’t get you expensive speaking gigs. Complicated stories don’t make you the darling of Silicon Valley. Simple ones do.

“What goes up, must come down” is probably too simple to explain what’s happening to Christensen right now, but it’s the best I can do at the moment, so I’m sticking with it. If anybody out there would like to revise my revisionism, please be my guest. After all, that’s how scholarship is supposed to work.

* Nobody was sure whether U.S. Steel still made its own steel in Gary, IN anymore. That’s a harder question to answer than you might think. Mittal maintained the capacity to make its own new steel, but when I was there they told me that they almost never used it.





Disruption disrupted.

17 06 2014

I never took a course in the history of technology. My dissertation (and very poorly read first book) was about labor relations in the American steel industry. While overdosing on industry trade journals, I quickly realized that how steelworkers labored depended upon how steel was made and that the best way to distinguish what I was writing from the many studies that had come before was to get the technological details right.

This proved to be a terrible strategy. While I’m quite sure that I did indeed get the technological details right, the people who read my manuscript never recognized this since they had all read or written books that got them wrong or never covered them at all. The worst comment I ever got (which, of course, I remember to this day) was “Rees knows nothing about the technology of the steel industry.” I begged to differ, but what could I do about it? Nothing.

I wrote Refrigeration Nation because I enjoyed reading old trade journals to get the details right and because I wanted to examine the technology of an industry that nobody else had written about. Surprisingly, the refrigeration industry still fit that description when I picked my second book project. Actually, refrigeration is not one technology, but many: ice harvesting equipment, large-scale industrial refrigerating machines, electric household refrigerators and others. If you read the book (and I certainly hope you do), you’ll see I spill the most ink writing about the transitions from one technology to another.

These transitions can be painfully slow. Ice harvesting didn’t die until around World War I. The ice man still delivered machine-made ice door-to-door in New York City during the 1950s. Even today, you can still buy what is generally known as “artisan ice” if you really want your drinks to be special. Perhaps this explains why I’ve always been so suspicious of Clayton Christensen’s theory of “disruptive innovation.” Everything I’ve ever studied that you’d expect to disappear in the blink of an eye when in competition with better technology always managed to hold on for decades.

By now, you’ve probably already read Jill Lepore’s absolutely devastating takedown of disruptive innovation in what I presume is this week’s New Yorker. [It appears rather late in my neck of Colorado. Thank goodness this one is outside the paywall!] If you still haven’t, let’s just say that Lepore is unimpressed by the work of her Harvard colleague:

Disruptive innovation as a theory of change is meant to serve both as a chronicle of the past (this has happened) and as a model for the future (it will keep happening). The strength of a prediction made from a model depends on the quality of the historical evidence and on the reliability of the methods used to gather and interpret it. Historical analysis proceeds from certain conditions regarding proof. None of these conditions have been met.

And remember, there’s plenty of excellent evidence for the pace of technological change in countless American industries. You’ve never read an Alfred Chandler takedown because Chandler actually consulted this stuff. Christensen, apparently, not so much.

Since I don’t have a team of fact checkers at my disposal, I’m just going to concentrate here on the industry Lepore covers that I know best: steel. Here’s Lepore:

In his discussion of the steel industry, in which he argues that established companies were disrupted by the technology of minimilling (melting down scrap metal to make cheaper, lower-quality sheet metal), Christensen writes that U.S. Steel, founded in 1901, lowered the cost of steel production from “nine labor-hours per ton of steel produced in 1980 to just under three hours per ton in 1991,” which he attributes to the company’s “ferociously attacking the size of its workforce, paring it from more than 93,000 in 1980 to fewer than 23,000 in 1991,” in order to point out that even this accomplishment could not stop the coming disruption. Christensen tends to ignore factors that don’t support his theory. Factors having effects on both production and profitability that Christensen does not mention are that, between 1986 and 1987, twenty-two thousand workers at U.S. Steel did not go to work, as part of a labor action, and that U.S. Steel’s workers are unionized and have been for generations, while minimill manufacturers, with their newer workforces, are generally non-union. Christensen’s logic here seems to be that the industry’s labor arrangements can have played no role in U.S. Steel’s struggles—and are not even worth mentioning—because U.S. Steel’s struggles must be a function of its having failed to build minimills. U.S. Steel’s struggles have been and remain grave, but its failure is by no means a matter of historical record. Today, the largest U.S. producer of steel is—U.S. Steel.

Two other factors that Lepore doesn’t mention (which makes me think that Christensen didn’t either) are environmental regulation and foreign competition – the second being the more important of the two to the overall fate of the industry. The success of minimills also required a huge decrease in the price of scrap steel. What these other factors suggest is that any hard and fast rule of technological change will inevitably fall victim to the unpredictability of people. My old advisor used to call this the social system of production, and practically the entire subfield of the history of technology is predicated on this notion rather than on Christensen’s brand of technological determinism.

For example, if I remember right, Chandler’s last book (I get the titles mixed up) is about the various quirks in the path of industrialization across international borders. In my work, the most important factor determining the speed at which one refrigerating technology transitions to another is its reception by consumers, and, amazingly enough, lots of refrigeration consumers just hate “progress.” Just to namecheck a great book that I happen to be reading right now, in Seeing Underground, Eric Nystrom describes the effect of political factors – especially lawsuits – on the quality of mine maps. In Butte, Montana, at least, the more lawsuits there were, the more precious metals they eventually found.

Of course, my interest in Christensen comes from his pronouncements about higher education. Lepore does very little with them in her article, but that shouldn’t stop anyone from applying the same logic that I just did here. There is no scientific law of the jungle that fates universities to go entirely online or die off. If people value direct human contact and the educational advantages it brings, they should be willing to pay – or force their governments to pay – for universities to teach in face-to-face settings. Like I wrote in Inside Higher Education a really long time ago now, all this talk about inevitability is just a way to shut down discussion so that the educational traits that we once valued will be abandoned more easily.

The great service that Lepore has performed is to metaphorically take the fight over those values to the source of the attacks against them. Like MacArthur at Inchon, she has landed behind enemy lines and will hopefully force the enemy to pull back and defend ideological territory that they thought they had already conquered. Those of us currently at risk of becoming victims of creative destruction can only hope she succeeds.





The (edtech) hippie revolt.

23 04 2014

One of the strangest rabbit holes I’ve gone down as a result of all this edtech blogging is the association between early developments in computing and the counterculture of the 1960s. While I’m not going to try to explain it here, you can read a really good summary of those links in John Markoff’s What the Dormouse Said from 2006. I actually assigned that book this semester and am currently pursuing those links with students in my 1945-Present class for the first time.

My students’ final paper assignment asks them to compare Markoff’s book to Michael Lewis’ The New New Thing and look for continuities between the 1960s and the 1990s. Much to my amazement, we found at least nine or ten of them while exploring possible paper theses the other day. I think it helps that Lewis’ book is now just as dated as the history that Markoff covers (which actually makes it better for use in history classes than when I first started assigning it!).

If you’re wondering whether I might have brought this paper topic up to the present, the answer is “no.” Every last trace of the Hippie Revolt has dissipated from Silicon Valley. How do I know? Check out this note which my friend Historiann got from her administration up in Fort Collins the other day:

This seminar will provide information about the university’s involvement in a national consortium that promises to enhance learning and teaching. The consortium, which includes several leading research universities, is exploring new directions in the use of instructional technologies. The intent is to facilitate and accelerate digital learning using the best integrated digital systems available that make it easy for faculty and enhance learning. The ecosystem consists of three components: a digital content repository/reflector, a service delivery platform, and a learning analytics service. The digital content repository/reflector will allow us to regain control over our digital learning objectives, allow faculty to choose to share/reuse digital content easily and seamlessly while preserving their digital rights. The service delivery platform is Canvas by Instructure, and has the characteristics of easier use by faculty and faster development of courses in it. The best learning analytics will be deployed and evolve apace as this area develops.

Historiann was rightfully flustered by this terrific example of edtech gobbledygook. [My favorite word in it is “ecosystem.” What’s yours?] I’d try to translate for her, but what’s the point? That would be playing the game on their home field in a struggle that we faculty are bound to lose.

Instead, let me suggest an alternative strategy: Think outside the box. If administrators and for-profit edtech concerns want to colonize our educational turf, then move the playing field. The easiest way to do that is what I’m pretty sure Historiann’s response is going to be: Don’t use their commercial learning management system and don’t teach online.

But even people interested in using more online tools than Historiann don’t have to surrender control of their classrooms to “The Man.” As I wrote in the Chronicle Vitae piece linked to above, Jim Groom, who blogs at Bava Tuesdays and who remains my hero, is working on Reclaim Hosting, a project to help faculty members learn to control their own domains. I, for one, want to learn how to use technology to teach history better, but I HAVE TO BE THE ONE WHO DECIDES WHAT CONSTITUTES “BETTER.” After all, I’m the one with all that teaching experience, not our administrators and not the techies who work for our LMS provider.

Does this position make me a hippie? Good. [Insert obligatory legal weed in Colorado joke here.] I think educational technology could use a lot more hippie and a lot less revolt – at least revolts of the unnecessarily disruptive kind. Don’t you?





“Domo arigato, Mr. Roboto.”

7 04 2014

Good news, everybody! Robots will only replace SOME of us at our jobs by 2034, not all of us. Who’ll be safe? As the Huffington Post explains, in part:

Human social intelligence is critical for those professions that involve negotiation, persuasion, leadership or high touch care. Those positions demanding high social intelligence tasks might include public relations specialists, event planners, psychologists and CEOs.

Does that include university professors? You’d hope so, but that would force the people in control of universities to actually respect the quality of the education they produce, and I’m not sure we can trust most of them to do that. The corporatization of higher education over the last forty years strongly suggests that most of them would rather treat education like any other manufactured product.

If education were a real factory problem, this transition might actually be an improvement. It’s not just that robot arms never get tired or ask for a pay raise. They can work with greater precision than even the best skilled craftsmen. I’ve toured the steel mill on the south side of Pueblo, Colorado many times now. While 10,000 people used to work there during WWII, fourteen people can now handle a shift rather easily in a building the size of several football fields. [And even then, a few of them are just waiting around in case something goes wrong.] Foreign competition, pensions and environmental regulations aside – the payroll in that plant would have gone down over the last fifty years just because of automation. Furthermore, the steel they produce there might actually be better as a result.

Can you say the same thing about a MOOC? The New York Times Magazine makes an argument for the effects of automation on workers in general that reminds me a lot of the argument for MOOCs:

Man invents a machine to make life easier, and then that machine reduces the need for man’s work. Ultimately, it’s a virtuous cycle, because it frees humans up to work on higher-value tasks.

Flip your classroom with the latest MOOC, spend more time in class teaching one-on-one. Everybody wins, right? Only if you completely ignore the class politics that surround labor-saving machinery of all kinds. Nick Carr explains this point here far better than I ever could:

The language that the purveyors of the endless-ladder myth use is fascinating. They attribute to technology a beneficent volition. The technology itself “frees us up for higher-value tasks” and “propels us into more fulfilling work” and “helps us to expand ourselves.” We just need to “allow” the technology to aid us. Much is obscured by such verbs. Technology doesn’t free us or propel us or help us. Technology doesn’t give a rat’s ass about us. It couldn’t care less whether we have a great job, a crappy job, or no job at all. It’s people who have volition. And the people who design and deploy technologies of production are rarely motivated by a desire to create jobs or make jobs more interesting or expand human potential. Jobs are a byproduct of the market’s invisible hand, not its aim.

If you think most administrators give a rat’s ass about whether there’s a human being or a robot at the front of the classroom then you haven’t been paying attention.





MOOC sublime.

15 03 2014

“The steamboat sublime took expropriation and extermination and renamed them ‘time’ and ‘technology.’ From the vista of the steamboat deck, Indians were consigned to prehistory, the dead-end time before history really began, represented by the monuments of ‘remote antiquity’ that lined the river’s bank.

The confrontation of steamboat and wilderness, of civilization and savagery, of relentless direction with boundless desolation, was called ‘Progress.'”

– Walter Johnson, River of Dark Dreams: Slavery and Empire in the Cotton Kingdom (Cambridge: Harvard University Press, 2013), 76-77.

Barbara Hahn of Texas Tech University is one of my very favorite people in all of Academia. Not only do we share similar interests and the same publisher, she is also a very, very good historian. As proof, I offer this from a new AHA Perspectives article intended to introduce other historians to the history of technology as a sub-field:

[A] difficult-to-shake belief in technological determinism—the idea that tools and inventions drive change, rather than humans—is widespread. When apps download on their own, or when cellphones appear to inspire texting over talking, it certainly feels as if technology changes and humans simply react. But most research into the history of technology undermines this widespread assumption. Technology itself has causes—human causes. If it didn’t, it would have no history. So the field by its very existence fights common misconceptions about technology.

Of course, the first thing I did after reading this article was to apply its lessons to MOOCs. Did MOOCs emerge fully grown out of Sebastian Thrun’s head? Of course not. They have both a history and a pre-history. While I’m not qualified to explore either of those subjects in any depth, I do want to explore the question of what a MOOC actually is from a technological standpoint so that others might have an easier time explaining that history.

Again, Barbara’s article can help. “What is technology?,” she asks:

Even experts struggle to fix its boundaries, but a modest definition will suffice to begin inquiry: technology is the systematic, purposeful, human manipulation of the physical world by means of some machine or tool. In this definition, technology becomes a process, rather than the artifact that process employs.

MOOCs, of course, employ a variety of technologies to achieve their goals, and since no two MOOCs are exactly alike (see Rule #2), the kinds of technology they use will differ. Video recording is one MOOC technology. A forum is another one. Some MOOCs use Google Hangouts. Others don’t. What they all have in common is the Internet as their base infrastructure, but since so many other things depend upon the Internet for their existence these days, I’d argue that that similarity obscures more than it illuminates.

As a student of the history of technology myself, I’d argue that what every MOOC has in common is a story to hold together the diverse technologies that it employs. Daphne Koller’s story involves bringing education to the undeveloped areas of the world. The story that all those nice Canadians tell involves students helping other students learn. As best I can tell, the story behind DS106 involves barely controlled anarchy (which might explain why it’s my favorite MOOC out there by far).

Listen to enough of these stories and you begin to detect patterns. What their proponents emphasize tells you what they think is important, but the opposite is true as well: what their proponents leave out tells you what narratives of MOOC progress discount or ignore altogether. Here’s a summary of a paper called “Do professors matter?: using an a/b test to evaluate the impact of instructor involvement on MOOC student outcomes,” which I’m pulling from the blog Virtual Canuck:

The study concluded that teacher presence had no significant relation to course completion, most badges awarded, intent to register in subsequent MOOCs or course satisfaction. This is of course bad news for teacher’s unions and those convinced that a live teacher must be present in order for significant learning to occur.

Well, let’s kill all the teachers then!!! What’s that you say? Probably not a good idea? I happen to agree, but if all you’re measuring is badges, course completion and MOOC satisfaction, then this kind of conclusion makes perfect sense. Learning, or at the very least the learning process, has been obliterated by the structural sacrifices that MOOC creation entails.
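As an aside, the statistical machinery behind a study like that is not mysterious, and seeing it laid out makes clear how little it actually measures. Here is a minimal sketch, in Python, of the two-proportion comparison that an A/B test of course completion boils down to; the enrollment and completion numbers are invented for illustration and are not taken from the paper:

from math import sqrt
from statistics import NormalDist

# Hypothetical A/B test: does visible instructor participation change completion?
# All counts below are invented for illustration only.
enrolled_a, completed_a = 12000, 540   # arm A: instructor active in the forums (~4.5%)
enrolled_b, completed_b = 12000, 510   # arm B: no instructor participation (~4.25%)

p_a, p_b = completed_a / enrolled_a, completed_b / enrolled_b

# Pooled two-proportion z-test on completion rates
pooled = (completed_a + completed_b) / (enrolled_a + enrolled_b)
se = sqrt(pooled * (1 - pooled) * (1 / enrolled_a + 1 / enrolled_b))
z = (p_a - p_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"completion {p_a:.2%} vs {p_b:.2%}; z = {z:.2f}, p = {p_value:.3f}")

With single-digit completion rates on both sides, “no significant difference” is an easy verdict to reach – which is exactly my point: a test like this can only speak to the outcomes somebody bothered to count, and completion is a poor proxy for learning.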

Another part of the learning process that disappears in the xMOOC story is the direct interaction between the professor and the student. You just knew I was going to get to this particular MOOC news nugget eventually, didn’t you?:

An English professor at Harvard University turned heads last month when she instructed students in her poetry class to refrain from asking questions during lectures so as not to disrupt recordings being made for the MOOC version of the course.

Elisa New, a professor of American literature, instituted the policy at the behest of technicians from HarvardX, the university’s online arm, according to The Harvard Crimson, which first reported the news. The video technicians reportedly told her they wanted to record a continuous lecture, with no back-and-forth with students.

Of course, professors play an oversized role in the xMOOC story, but what this wonderfully symbolic anecdote shows us is that the process of teaching doesn’t. If anybody fails to understand this superprofessor’s lectures, in class or in the MOOC, they are just S.O.L. This shows that what we used to think of as teaching is being replaced by mere content provision in this new narrative, which I think I’m going to start calling the MOOC sublime.

In Walter Johnson’s version of steamboat sublime, “Progress” rendered Native Americans invisible. In the MOOC sublime, the people who disappear are the faculty members who choose to cling to the outmoded, inefficient mode of instruction that so many MOOCs aim to replace. Who cares if we use actual technology ourselves? As long as we fail to board the MOOC train before it leaves the station we are expendable.

How do you fight this kind of passive-aggressive, often self-interested narrative attack? I think we alleged Luddites need to come up with a story of our own in order to help us resist the fate that the edtech entrepreneurs of Silicon Valley have in store for us. I guess this post is my shot at doing so. Any additional details in the comments below would be much appreciated. After all, so many of our jobs may depend upon how well we can all tell it.





Nothing is inevitable.

11 03 2014

One of the great things about blogging is that you literally have no idea who might stop by in the comments. When I first assumed my role as “Self-appointed Scourge of All MOOCs Everywhere,” somebody famous in MOOC circles might stop by and I wouldn’t have the foggiest clue who they were. Thanks to the famous Bady/Shirky debate of 2012, I know exactly who Clay Shirky is. While I’m still on Team @zunguzungu, I must say it’s quite an honor to have somebody with 301,000+ Twitter followers stop by the comments of this post and write enough material to merit a post of his own.

Another great thing about blogging is that you can move long conversations in the comments into a new post if you’re so inclined. I am, so here it is. Before I get into the details, though, let me just note that I wasn’t trying to somehow summon Clay Shirky by writing, “Your Historical Analogies Are Bullshit.” If you notice, he wasn’t even first on my list of examples later in that paragraph. In fact, that point wasn’t even relevant to the news article that originally set me off.

What happened was that I had just been teaching Tom Sugrue’s classic Origins of the Urban Crisis for the first time, and rereading this passage (p. 11) reminded me that nothing is inevitable:

“The shape of the postwar city, I contend, is the result of political and economic decisions, of choices made and not made by various institutions, groups, and individuals. Industrial location policy is not solely the result of technological imperatives; it is the result of corporate policies to minimize union strength, to avoid taxes, and to exploit new markets.”

You don’t even have to change that many words in order to make that caution relevant to higher education. Nevertheless, MOOC-ology thrives because it assumes that we are already well down the path of progress to a techno-utopian future that nobody can ever stop.

Unlike most edtech reporters, Clay Shirky at least gives us a lot of analysis to go with this narrative. Here’s a big chunk of his first comment (please do go back and read the whole thing though if you are so inclined):

The point of the comparison is not that MOOCs are Teh Future — indeed, in my original post on the subject, I specifically assumed that MOOCs, as constituted, could fail outright, as Napster did.

Instead, I made the analogy in order to suggest that what happened to us in 2011 is like what happened to the recording industry in 2000, which is the collapse of the incumbents to convince the public that there is no alternative to the current way of doing business. So let me make a prediction based on that analogy: there will be more movement in state legislatures in the next 5 years on creation of the $10K BA than on the raising of state subsidy.

Even though faculty are all but unanimous on the idea that university costs and revenues need to be aligned through more generous revenues rather than by reduction in costs, I believe that The Year of the MOOC, already receding, has robbed us of our key asset in making that claim, which was the lack of a credible alternative.

This is, I believe, remarkably similar to the music industry, who achieved a rapid and total victory over Napster and nevertheless lost control of even legal distribution of music, because the public no longer operated in an environment of assumed consensus about how music distribution should work.

To me, this line of reasoning is what we used to call in high school debate “non-unique.” Much of the public is hostile to higher education for both cultural and economic reasons already. Had those nice Canadian people never conceived of MOOCs, we would right now be having a different debate in order to save higher education. You can’t claim that technology has conquered the savage beast when the savage beast has already taken several more-than-glancing blows from many different directions.

Here’s some of Clay’s response to my point (and this one is edited for brevity, so please do check the full comment here to see his ideas in their full context):

The core technology [of the MOOC] is the video lecture, already in its precursor forms with TED and Khan Academy videos; the innovation was to place enough structure around them that they came to feel to citizens like they should count in the same way that other kinds of classes (including online classes) count. The form of the famous 2011 MOOCs — a simplistic beads-on-a-string model of lectures and quizzes, with no social contact folded into the system — was wrong in many of the ways people have noted, but it was right in one big way: it sketched in a model of higher education where more people could complete a single class than attend most colleges, and they could do it for free….

To put it in its most reductionist terms, the 2011 MOOCs changed the world because they offered a compelling enough story for John Markoff to write about. That’s not the same as being the core innovation of any future educational landscape, but as with Napster, sometimes 2 years of counter-example is sometimes enough to destabilize a system.

As I’m sure regular readers are sick to death of refrigeration analogies, let me at least go into a different industry. While I’m not sure I ever footnoted it in my book, Richard John’s Network Nation is to me the model analysis of a dead industry. By all rights, the Post Office should have been dead for over a century now. First the telegraph, then the telephone (and certainly now e-mail) have provided easier, just as cheap (if not cheaper) and more convenient communication for just about any message you want to convey. Yet the American public has seen fit to subsidize this endeavor to keep the letters coming. Yes, my mail is mostly down to just junk mail these days, but even that serves a purpose that people who tell a long narrative of steady progress refuse to recognize.

While this too may just be an alternative bullshit historical analogy, I make it to highlight the importance of contingency. Clay Shirky offers us an extremely compelling narrative of progress, but progress is based on countless contingencies. Yes, not all of the points in a historical analogy have to match, but they really should point to the same abstract processes. The only abstract process I see in the Napster analogy is inevitable defeat, which I refuse to believe is inevitable. In the end, the point of this analogy is to tell faculty like me to let the warm water wash over everybody, even if those waters are high enough that most of us will drown.

Call me naive, but I can see a different future. My future is still technologically-oriented, but in my future it’s faculty, not administrators or private companies, who control the technology of higher education. How do we achieve my particular techno-utopian future? By asserting our pedagogical expertise rather than by farming our teaching out to untrained amateurs.





Groundhog Day.

7 03 2014

I think I know how Phil Connors felt. Or maybe it’s how Audrey Watters feels every day. The longer I keep reading stuff about MOOCs, the more I feel like I’m caught in some kind of bizarre time warp. For example, something about this article just drives me crazy:

Imagine the potential of MOOCs! they claimed—universally obtainable college classes taught to millions of learners by cream-of-the-crop professors for free or very low cost. The democratization of elite education! Some even predicted that MOOCs—now boasting more than 10 million students and thousands of classes—would do nothing less than revolutionize higher education, making residential colleges obsolete in the process.

But all that rises soon must fall.

Negative critiques began mounting—from longtime educators, faculty unions and watch guards of traditional pedagogy. Many said the MOOC phenomenon was, at its core, a threat to brick-and-mortar colleges and an affront to the traditional purveyors of higher education. After early data showed that some 90 percent of MOOC students drop out before completing courses, critics declared proof of failure. Just a few months ago, critics almost cheered when a study by the University of Pennsylvania’s Graduate School of Education found that MOOC student engagement falls off an even steeper cliff shortly after each class begins, with course-completion rates averaging more like 4 percent. Meanwhile, it was also becoming clear that most of those who completed MOOCs were already highly educated, decidedly motivated. Slate published a blistering critique, NPR aired a negative take, and other headlines across the country took up a new pessimistic chant with: “Are MOOCs already over?” (The Washington Post), “Are MOOCS Really A Failure?” (Forbes), “All Hail MOOCs! Just Don’t Ask if They Actually Work” (Time).

Don’t get me wrong here: “MOOCs are up, MOOCs are down” is a lot better than “MOOCs are about to take over the world.” And certainly, I agree with everything Mark Brown (the designated anti-MOOC spokesperson for this article) says in the parts that I haven’t excerpted (and whose blog is the reason I saw it in the first place). I think my problem is that people have been doing these “Introduction to MOOCs” articles for at least two years! Can we pleez haz sum analysis now?

Therefore, in the spirit of helping reporters everywhere get over the learning curve and to prevent me from developing a need to rob a bank, commit suicide or punch Ned Ryerson in the face, I have developed the following six rules for anyone writing about MOOCs, whether they’re doing so for the first time or whether it’s their daily beat:

1. MOOCs and online learning are not the same thing. If you can’t tell the difference between a MOOC and a regular online class, you should go back to reworking Chamber of Commerce press releases for the local free paper distributed at the shopping center. Stop reading now. Do not pass go. Do not collect $200. I don’t even want to talk to you, let alone try to explain stuff this obvious.

2. There is more than one kind of MOOC. Of course I’m referring to the difference between xMOOCs and cMOOCs, but really there are even more kinds than that. There are corporate MOOCs and non-corporate MOOCs. There are MOOCs that last fourteen weeks and MOOCs that last six days. There are MOOCs that take themselves seriously and MOOCs (like that “Walking Dead” MOOC) that don’t. Writing as if all MOOCs are the same does a tremendous disservice to the different kinds of innovations that have been taken up under the term that all those nice Canadians coined originally to mean a very specific kind of class. I try very hard on this blog to be very specific about what kinds of MOOCs I’m criticizing and (very occasionally) what kind of MOOC I’m praising. Reporters should too.

3. Teaching computer science is not the same thing as teaching literature. Geez, this should go without saying, but it clearly doesn’t. Just because you flipped your pharmacy class successfully with MOOC lectures doesn’t mean that my history class can do the same thing. Indeed, just because you flipped your pharmacy class doesn’t mean that all pharmacy professors can achieve the goals that they want their students to meet by using MOOCs. I’m sick and tired of superprofessors getting up on their high horse and declaring that they’ve solved every educational problem for all time when the best they could possibly hope to achieve is to find an educational format that works for the unique circumstances facing them and their students.

4. Faculty attitudes towards MOOCs are best expressed by a spectrum, not by two warring factions. Do I hate MOOCs? No. As I’ve written elsewhere, “I’m not actually against everything.” Do I hate top-down administrative control of technology and pedagogy? Yes. Do you know what I really love? Faculty autonomy. Really, we all just want the right and the resources to do our own thing. Let professors make their own MOOCs and implement them on their own terms and you won’t hear a peep out of me (as long as their doing their own thing doesn’t impinge on others doing their own thing too).

5. Your historical analogies are bullshit. Higher ed is like the newspaper industry! Higher ed is like the early film industry! Higher ed is like the record industry! These are not objective musings that come as a result of historical research; as I wrote a long time ago now, these analogies are really just heavy-handed attempts to shut down discussions about the effects of technological change so that a select group of people can profit from it. This particular kind of technological determinism requires ignoring the very subjective ways that politics, power and culture shape changes in technology over time. In a way, I guess the point of all my MOOC blogging has been to do precisely the opposite in an edtech context, because you aren’t ever going to see Clayton Christensen, Clay Shirky or Daphne Koller do anything of the sort.

6. The revolution will not be televised. The revolution will not be Chronicle-d. It will not be Inside Higher Education-ed. It will not be Techcrunch-ed. It will not be Pando Daily-ed. In fact, most technological revolutions happen at such slow speeds that nobody ever notices them.

Consider this bullshit historical analogy of my own: The electric household refrigerator first appeared in American homes during the early 1920s. A majority of American homes had a refrigerator in them sometime during WWII. Yet, while I was working on my book, plenty of people told me that their houses had iceboxes or that ice delivery men still came to their doors during the late 1950s and early 1960s. How could this be? Refrigerators were cheap, and so convenient! The answer is simple really: markets are not perfectly efficient and plenty of people resist change for a wide variety of reasons, including ones that have nothing to do with money. In other words, my bullshit historical analogy demonstrates why all historical analogies are bullshit, at least in a technological context.

If you can think of another rule that I’ve forgotten, please leave it in the comments below.





MOOCs are in Joan Rivers, but they’re trying to get out.

22 10 2013

During the 1840s and early 1850s, American ice harvesters tried to sell their product in Great Britain for the first time. It was the technological marvel of its day – clear ice cut from lakes and ponds in New England shipped intact across the ocean. When you think about it, it’s still an impressive technological feat. Cut ice into regular blocks, pack it in a ship with the blocks separated by sawdust like mortar in a brick wall, and 50-75% of the ice will still be intact when you take it to the other side of the world.

London had never seen anything like it. One supplier displayed a block in their window on the Strand. People would stop and gawk at it (not realizing that the supplier replaced the block as it melted). That same supplier convinced Queen Victoria to endorse their product. Another brought over American bartenders to make “American” iced drinks. Unfortunately, the hype couldn’t sell enough of this novel product to keep the market afloat.

While natural ice was a sensation with the upper crust for a few seasons, the product never penetrated the middle or lower classes. Cost explains that result to some extent, but so does culture. The British just didn’t much care for iced drinks. For decades, the only place you could buy ice in England was at the fishmonger, where they used it to display their catch. It’s been seven years since the last time I was in England, but I remember it was next to impossible to get ice cubes there even then. Because I didn’t want to be an Ugly American, I just stopped asking.

Why am I writing about ice cubes in what’s clearly a MOOC post? Well, there’s the fact that I’d much rather be promoting my book than going to this well yet again, but this piece really did make me think of the longstanding British distaste for ice cubes:

How do critics expect a MOOC to simply come in and present itself as a viable and legitimate replacement as a signal of student competence against some of our most revered and trusted institutions? Harvard, Yale, and Princeton opened their gates in 1636, 1701 and 1746. I daresay that it is asking a tad much of this nascent experiment to eclipse the prestige of these institutions after a meagre few years.

Harvard, Yale and Princeton, bless their hearts, had paying students right from the beginning. MOOCs, alas for the techno-utopians among us, have no business model to speak of at all. American ice providers would have loved a few extra years in order to convince British people to consume cold drinks, but Mean Mr. Market didn’t give them any. I think the same thing will be true for MOOCs, no matter how successful their experiment happens to be.

While I think we have enough evidence to pronounce MOOCs a pedagogical failure (if not a business one), the author of this piece has a much rosier view of education technology. If your stomach is strong enough to click this link, you’ll see that it’s a response to Sarah Kendzior’s recent Aljazeera piece on MOOCs, “When MOOCs Profit, Who Pays?” Luckily for me, Sarah Kendzior is more than capable of taking care of herself, so I have no need to violate my pledge not to rehash arguments from the “Year of the MOOC” that I’ve been over 1,000 times before in this space.

Nevertheless, there is something new and different here. The faith-based manner in which the author accepts the arguments of the Masters of MOOC Creation is almost touching in its naiveté. That’s why a long excerpt is in order:

The very notion that MOOC providers are wedging income groups further and further apart is laughable after just a cursory read of the quixotic and lofty aims that their founders propagate. To say that MOOCs are an accomplice to the hardships suffered by students because of the tortured state of higher education is to fail to understand what one actually is and why the mode came into being.

Their founders talk of goals such as bringing the highest quality education to the remotest parts of the world, to offer students the same level and depth of instruction, irrespective of their financial or ethnic background. How can a concept so fundamentally egalitarian and open be accused of creating educational inequalities? MOOC providers can boast stories of their courses giving new leases on life to Syrians suffering the tolls of war and giving humanitarians new tools to inform their field work. Is this not the exact opposite of increasing inequality? And given that MOOC providers have not the ambition nor aspiration for their platforms replace the institutions of university, there is no immediately conceivable possibility of a two-tiered education system arising as a result of their existence.

This is pure faith-based education reform if I’ve ever seen it. The author sees the potential for helping suffering Syrians and therefore assumes that all of us must accept one and only one version of the potential future so that those Syrians can get their MOOCs. If the people being helped can’t actually pay for their MOOCs, then American college students have a duty to propel this experiment forward.

If I see Elvis everywhere, does that mean we need to go to Graceland and dig up his body just to make sure he’s really dead? It reminds me of one of those charities that’s all smiles in their infomercials, but 95% of the donations go to the founder’s bank accounts. We’re helping because we say so. Period. End of story. Don’t follow the money or you’ll hurt the people we’re helping.

MOOCs, in short, have become all things to all people. For the naive techies of the world, they will end inequality. To investors in Udacity and Coursera, they will hopefully make enough money to aggravate it. To superprofessors, they will bring quality learning to the masses. To the retired physics professors of the world who take every MOOC in sight, they’re an opportunity for entertainment that beats whatever is on television. All of this is a product of the fact that MOOC providers have absolutely no idea what their market even is. Unfortunately for them, they’ll have better luck bringing Elvis back from the dead than they will satisfying all these constituencies at once.





Why you should buy my book or the refrigeration blogging begins.

23 09 2013

During the 1990s, the fourth floor of the Engineering Library at the University of Wisconsin – Madison had shelves lined with old trade journals. When you got off the elevator, the volumes directly at eye level were called “Ice and Refrigeration.” I was working with unbelievably old copies of the journal “Iron Age” back then, as my dissertation was about the steel industry. These were not quite so old, but they, like their subject, were almost untouched by human hands (which I knew because I had to separate many of the pages myself). What I found there seemed quite extraordinary.

Once upon a time (c. 1900) there was an enormous ice industry in the United States. Huge plants running five-ton machinery would knock out sheets and blocks of ice the size of several people. This ice then got broken up and sold door-to-door from covered wagons in towns and cities across America. I had heard of the cutting of ice off lakes and rivers around New England before this. From there the ice got transported to and sold in places as far away as India. But this was something else entirely! Here was an enormous, historically significant, completely dead industry, untouched in the historiography. I started my research trying to explain why these plants seemed to burn down so much. They were ice plants, after all! Technology and Culture published that all the way back in 2005. Then I kept going.

From there, I started reading about all the different segments of this industry and decided I wanted to write a book about how one technology passed into another: natural ice to mechanical refrigeration to home refrigeration, iced refrigerator cars to mechanically refrigerated railway cars to refrigerated shipping containers, iceboxes to electric household refrigerators and many more. What I found was that “inferior” technologies you’d expect to go extinct quickly persisted longer than you might ever imagine. The ice delivery man, for example, survived into the 1950s. Ice harvesting with horses actually survived past World War I. This tendency, as you might imagine, has had a huge influence on my MOOC blogging.

The other reason you should read my book is that it’s really great food writing. No, it’s not why cod or the hamburger or the ice cube saved the world, but it covers all these things and more. Basically, if you want to research anything that deals with perishable food, you’re going to have to read this book. After all, the last scholarly publication on this subject appeared in the early 1950s. I remain amazed that I spent thirteen years (off and on) writing up this project and nobody beat me to the punch.

So, have I piqued your interest? If so, you can visit the nice people at the Johns Hopkins University Press and get your copy of Refrigeration Nation faster than from any online bookseller, as theirs are in stock now. Even just recommending it to your local library would make me very happy.







