Going, going…gone?

3 07 2014

Thanks to the great Sarah Kendzior, I picked up a copy of Gone Girl when I was in Target yesterday.  While I’ve barely started it, I can tell it’s going to be really, really good.  What made me do it was Sarah’s piece on the death of malls; until I read it, I didn’t realize that the book had economic themes, like this quote from the opening pages:

“I’d arrived in New York in the late ’90s, the last gasp of the glory days, although no one knew it then.  New York was packed with writers, real writers, because there were magazines, real magazines, loads of them.  This was back when the Internet was still some exotic pet kept in the corner of the publishing world–throw some kibble at it, watch it dance on its little leash, oh quite cute, it definitely won’t kill us in the night.  Think about it: a time when newly graduated college kids could come to New York and get paid to write.  We had no clue that we were embarking on careers that would vanish within a decade.”

I love good magazine journalism.  I have an almost religious devotion to the New Yorker since my parents were subscribers.  Yet I also read the Huffington Post and give content away for free at this very blog.  Does that make me a bad person?

Of course, it’s easy to imagine some bitter magazine writer muttering to themselves, “You’re next, professor.”  That, at least, is what the “disrupt higher education” crowd would like everybody to believe.  Longtime readers know that I constantly go back and forth on the question of whether or not my profession will “vanish within a decade.”  Today, I’m definitely in “I will survive” mode.

What’s improved my mood is an op-ed at Inside Higher Education.  The author, Randy Best, is discussing the pricing model for online classes:

These days, two out of three students attending on-campus programs receive some form of generous subsidy or discount, while their online counterparts, generally ineligible for such assistance, foot the full sticker price even though they do not benefit from all the amenities of the revered campus life, do not take up parking spaces, inflict wear and tear on facilities, or take up as much instructor time. Instead of embracing these online learners who produce considerable incremental revenue for institutions, colleges and universities are penalizing them, which has troubling implications not only for students’ bank accounts, but also for universities’ own vaunted views of fairness. By introducing e-tuition, which is appropriately lower than the on-campus price tag, universities could easily capitalize on the scale, brand extension, and new revenue synonymous with online learning while maintaining far more equitable pricing for online students.

Never.  Gonna.  Happen.  How do I know?  The entire existence of online courses is predicated on the notion that they’re just as good as the courses offered on campus.  Since discounting online courses relative to face-to-face courses would strongly suggest that they’re somehow inferior, universities will do nothing to discredit their brands.* Besides that, many of them have undoubtedly already spent the money they expect to take in through online courses on new administrators and climbing walls in the gym.

Best’s example of a university that’s bucking this trend is the online computer science MS at Georgia Tech.  If you remember, though, this is the MOOC-ish degree in which people spend most of their time watching video, with minimal outside help.  They’re only discounting the sticker price because they’re hoping to make it up in volume and, as Chris Newfield has argued persuasively, their revenue estimates are probably inflated.  That would make this the exception that proves the rule.

The problem here is prestige.  A long time ago, when I was in another one of these “I will survive” moods, I wrote a post called “The Walmart of higher education will not be online”:

The assumption that college is too expensive is certainly correct. The problem with this article is that it assumes that online education is the way to solve that problem. As I’ve noted before, it is possible to do some really interesting things with education online. However, if you’re just doing it to save money so that your university can keep more money for other things, your online courses are going to be awful. As a result, nobody will learn anything and they’ll all end up unemployed. The negative feedback loop will then lead to a real crisis in higher education, brought on by the people who thought they were saving it.

Maybe this time I’ll stop changing my mind.  The future is a lot closer than it was when I first wrote that post, and the evidence of discounting still hasn’t surfaced.  What’s going to make them change their minds?

*  Of course, the good online courses aren’t inferior to the ones offered on campus, but if you discount them the market will still likely treat them that way.





“And we can act like we come from out of this world and leave the real one far behind.”

1 07 2014

Mark Cheathem has done me a great favor. He’s written the exact post that I would have written about this Junto interview about MOOCs with the historian Peter Onuf so that I don’t have to repeat myself. Indeed, Mark has provided plenty of links to this blog so that I can make those points myself without writing another word. And while I know Onuf primarily from his excellent work on the wonderful radio show BackStory, I can also second Mark’s respect for his obvious talent as an historian.

So what is there left for me to write here? There’s a part of that interview that can help me make a point that’s been bubbling around the back of my mind for about a week now. This is Onuf:

Let me talk a little about my dubiousness. As for any other self-respecting academic, this seemed suspiciously like a substitution for conventional lecturing. If this was the future it was a future that we looked at with mixed feelings—that this would reinforce the emerging inequality in higher education, which mirrors that of the nation as a whole, with some institutions monopolizing the airwaves, displacing lecturers and teachers, making places like Stanford, Harvard, and MIT the centers of a new era of pedagogy. And that sounded pretty ominous, particularly given that people were being asked to create these MOOCs in their spare or extra time. And you can imagine the scenarios that would play out: “Well, we don’t need you anymore! We got you on MOOC.”

Now that’s certainly exaggerated; it suggests a fundamental bad faith at the level of administration, and I’m not willing to go that far. But I just wanted to say I had mixed feelings.

[Emphasis added]

If I remember the way that the History Guys get introduced on the radio these days, Onuf is retired – or at least retired from regular teaching. [Indeed, he notes later in the interview that Alan Taylor is his successor at the University of Virginia.] This means the cost of his being wrong about the intentions of his administration is exactly zero. Indeed, if you remember, it’s not the administration in Charlottesville whose good faith anyone there should worry about; the problem is higher up. On second thought, even administrations acting in good faith will do bad things when pressured from above, so really people there have a right to stay worried about everybody.

And so do people elsewhere. Onuf makes a common mistake among superprofessors when he assumes that the people running the universities that produce MOOCs are the only people whose faith he needs to measure. Nobody among us MOOC skeptics is arguing that Alan Taylor is going to be replaced by old Peter Onuf tapes. The people we’re worried about are the community college professors down the street or across the country. If you make a MOOC, you have a responsibility to be sure that it is used wisely. Simply letting the chips fall where they may clearly demonstrates that you’ve left the real world far behind – the world of MOOC consumers rather than the world of MOOC producers.

Sadly, you don’t have to be a superprofessor in order to adopt this attitude towards online education of all kinds. Here, in a Google+ posting inspired by my last Chronicle Vitae column, my favorite online instructor of all time, Laura Gibbs, makes the same mistake that Onuf does – assuming that since the situation in her own classes is excellent, everything will eventually work out great elsewhere too:

“[T]here is no reason at all to suppose that an online instructor is more or less likely to care to know their students (see quote below). I care. I care a lot, in fact. Meanwhile, I also know that plenty of face to face faculty don’t care. Which is their choice, a personal choice – not technological determinism.

I guess Jonathan is assuming that online faculty have higher teaching loads (“too many students”), but that’s not necessarily the case at all. For example: big lecture classes that take place face to face. I would contend that there is more distance in a big lecture hall than in any of my online classes. I teach appx. 100 students total per semester… not too many for me; it works fine. And I do care to know them – plus, teaching online, I have far more opportunities to get to know them than I ever did in a classroom.”

The problem, of course, is not with online or face-to-face faculty per se. The problem is the circumstances in which they teach. I am against giant, impersonal face-to-face classes. I am against giant, impersonal online classes. The question becomes: how do we make sure that all students, online or face-to-face, learn under the best circumstances possible? Safety first!

Am I arguing that all administrators are inherently bad? Of course not. But some are, and if you don’t take steps to prevent abuse you’re practically giving the bad ones an invitation to do mischief. This goes for all aspects of academic life. However, if the tool you’re using to do a job is more dangerous than another, more safety measures are very much in order.

With great power comes great responsibility. And like it or not, the superprofessors of this world have a lot more power than the rest of us do. All we can do is remind them of their responsibility to use it wisely.





You do not need an LMS in order to teach with technology.

28 06 2014

“…Silicon Valley’s reigning assumption: Anything that can be automated should be automated. If it’s possible to program a computer to do something a person can do, then the computer should do it. That way, the person will be “freed up” to do something “more valuable.” Completely absent from this view is any sense of what it actually means to be a human being.”

- Nick Carr, “An android dreams of automation,” Rough Type, June 26, 2014.

Who dropped the ball? It certainly wasn’t me. Was it you?

When I stopped taking graduate classes in 1993, people had barely heard of the Internet, let alone any kind of learning management system (or LMS). I had never even taken a class that used the Internet, let alone an LMS. I didn’t start teaching with any kind of technology until I got some professional development while working at what is now Missouri State University. My department chairman there mandated that all syllabi be posted online (a really good idea that I still don’t think most universities have bothered to adopt). As a result, I learned what I think was then called Microsoft FrontPage and haven’t handed out a piece of paper in class since.

I also remember attending the first Blackboard training classes my current employer offered. I thought the system was mostly bells and whistles and refused to use it. Then, in the same way that I gave Moby Dick another chance after hating it the first time I read it (actually, I still hate Moby Dick, but that’s the subject for a whole different blog), I gave Blackboard another chance a couple of years later. I came to the same conclusion and haven’t touched any LMS since.

Yet while I was learning which teaching technologies I liked and eschewing the others, a sea change was taking place in higher education. Learning management systems were quickly (if you call fifteen years or so quick) going from novelty to norm. At first, I was simply annoyed because my students kept asking me what their grade was during the semester (since they could always see it for most other classes in the LMS) and I had to keep telling them that I hadn’t done the calculations yet. Over time, however, LMSs have become a way for administrations and edtech companies to control the manner in which professors teach. Yes, you can still pick your content – I think – but many of the other decisions that professors used to be able to make by themselves (whether to tell students how they’re doing at every point during the semester, for example*) are now determined by the capabilities of learning management systems to process and present information.

What I’m wondering now is how this happened. I wasn’t really paying attention at the time and I haven’t done the research into this little piece of edtech history, but I do have some theories that I was hoping people better informed than I am might kick around:

1. It was the online instructors. They did it!!!

OK, maybe not the online instructors themselves, but online instruction itself may well be to blame here. Imagine it’s the late 1990s. All these universities want to go into online instruction on the cheap, so, like IBM with Windows, they outsource the operating system to companies that are dying to serve them. The universities themselves are so pleased with the ability to monitor classroom interactions that they then go and encourage every other faculty member to use the LMS too. Pretty soon, scads of us can’t live without one.

2. Faculty were sold a bill of goods with respect to convenience.

Why would anybody first pick up the LMS habit? Time would be a great incentive. I still remember how amazed I was when I first learned Excel so that I could compute my grades with it. It literally saved me at least eight hours each semester, at exactly the time of year when my time was most important! Gradebooks in any LMS would do the same thing (there’s a quick sketch of that kind of grade arithmetic after this list). Such conveniences may have convinced lots of people to invite a guest to the party who then decided to monetize the punchbowl. Pretty soon, who has time to learn any other system?

3. It started with the adjunct faculty.

The same way that adjunct faculty can’t pick their textbooks in many cases, perhaps they were the natural beta testers for learning management systems – particularly in online settings, where the regular tenure-track faculty was likely not paying attention. Once adjuncts became hooked on doing things through an intermediary, regular faculty went along because that seemed like the right thing to do. I don’t know exactly how LMS contracts are structured, but imagine them all being campus-wide site licenses. Even if the total cost rises with the number of users, each additional user is cheaper than the last, and pretty soon the whole thing would have just snowballed.
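As for the grade arithmetic I mentioned under theory #2, the calculation itself is trivial, which is exactly why handing it to a spreadsheet (or a few lines of code) saves so many hours at the end of the semester. Here is a minimal sketch in Python of the kind of weighted-average computation I mean; the category weights and scores are invented for illustration, not taken from any real gradebook:

```python
# A minimal sketch of end-of-semester grade arithmetic.
# The category weights and student scores below are hypothetical,
# invented purely for illustration.

WEIGHTS = {"exams": 0.5, "papers": 0.3, "participation": 0.2}

def semester_grade(scores):
    """Average each category (0-100 scale), then combine by weight."""
    total = 0.0
    for category, weight in WEIGHTS.items():
        marks = scores[category]
        total += weight * (sum(marks) / len(marks))
    return round(total, 1)

# One hypothetical student:
student = {
    "exams": [82, 90],
    "papers": [88, 79, 91],
    "participation": [95],
}
print(semester_grade(student))  # prints 87.8
```

An LMS gradebook is essentially this loop, prepackaged and made permanently visible to students – which is the whole convenience argument in a nutshell.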

Whether it’s all these things or none of these things, there’s still time to remember three very important points and begin to act upon them:

1. You do not need an LMS in order to teach.
2. You do not need an LMS in order to teach with technology.
3. The selection of educational technologies you can use outside the LMS is only getting better.**

If we forget these simple facts, we will all likely become victims of Silicon Valley’s reigning assumption sooner or later, once the LMS takes over most of our jobs entirely. At least this will free us up to spend more time looking for better-paying work, while our students suffer from a chronically substandard education which just happens to be delivered with a few elements based upon the use of modern technology.

* I strongly suspect that those of you who actually teach with LMSs can come up with a better example than that one. Please do so and explain it in the comments below.

** Who remembers what happened to AOL? I certainly do.





A world without us.

25 06 2014

By now, you should have “met” John Kuhlman. My correspondence with him began after my office hours piece, and has only gotten more interesting over time. One of the things he’s suggested to me that I particularly like is the idea of measuring teaching effectiveness by the responses that professors receive from their students. I imagine this not simply as a question of counting the number of comments you get on your teaching evaluations, but of looking anywhere (e-mails, LinkedIn requests, whatever) for active engagement with your pedagogy, both good and bad.

You say I’m a dreamer? Of course I am. The corporate types that have taken over most of higher ed will never let qualitative measures happen, because doing so would fly in the face of everything that modern management philosophy represents. As Chris Newfield explains in his epic contextualization of the Christensen/Lepore grudge match:

In contrast to professional authority, which is grounded in expertise and expert communities, managerial authority flows from its ties to owners and is formally independent of expertise. Management obviously needs to be competent, but competence seems no longer to require either substantive expertise with the firm’s products or meaningful contact with employees. The absence of contact with and substantive knowledge of core activities, in managerial culture, function as an operational strength. In universities, faculty administrators lose effectiveness when they are seen as too close to the faculty to make tough decisions. In the well-known story that Prof. Lepore retold, the head of the University of Virginia’s Board of Visitors decided to fire the university president on the grounds that she would not push online tech innovation with the speed recommended by an admired Wall Street Journal article. The Christensen model does not favor university managers who understand what happens in the classroom and who bring students and faculty into the strategy process. For employees and customers are exactly the people who want to sustain and improve what they already have, which in disruptive capitalism is a loser’s game.

What universities already have is us – by which I mean the professoriate. Applying Christensen’s value-neutral philosophy of “progress” to higher education inevitably means getting rid of faculty entirely, no matter what kind of meaningful responses they can elicit from their students.

Perhaps a very brief history is in order. Starting around 1970, universities began to use adjunct faculty to spare themselves the cost of hiring tenure track faculty who demand crazy things like health benefits and academic freedom. People not in those positions mostly did not object to this development because they did not see that it affected them. Where are we now? As the anonymous genius behind “100 Reasons NOT to Go To Graduate School” noted in their first post in a really long time:

There are now nearly 3.5 million Americans with doctorates (see Reason 55) but only 1.3 million postsecondary teaching jobs (see Reason 29), and the oversupply of PhDs is becoming a crisis in the rest of the world as well. A Norwegian newspaper has called it the academic epidemic. Legions of graduate students spend years of their lives preparing to compete for jobs that are few in number and promise little opportunity for advancement. The academic world is one in which ambition is rewarded with disappointment millions of times over.

The real “disruption” in higher ed is the entirely understandable willingness of people at the wrong end of that numerical divide to undercut the wages and prerogatives of the faculty on the other in order to scrape out a living. Technology which allows anybody with an internet connection to teach anywhere makes this process ridiculously easy, while academic management types use the tuition checks that keep flowing in to hire more managers.

The obvious next step in this process is to cut out faculty entirely. Since a university can’t survive without any teaching at all, you unbundle teaching instead – so thoroughly that almost nobody can make a living doing it. Here’s Katherine Moos from Chronicle Vitae last year:

Private and public universities are pouring millions of dollars into MOOCs. Where will the savings be realized? An organization (in this case, a university) won’t invest in a new technology unless there’s a long-term labor cost advantage to doing so—hence the term “labor-saving technology.” Remember Adam Smith’s pin factory? Now picture one professor video-lecturing, another taking attendance, and yet a third grading assignments (perhaps from another country). Rather than producing original research and unique pedagogy, professors could quickly join the ranks of workers providing highly specialized and deskilled services.

I would suggest that this kind of de-skilling is so drastic that the word “faculty” is no longer appropriate. Indeed, just try to imagine someone receiving the kind of letters that John Kuhlman received simply by taking attendance really, really well. And while students might be listening to content from the most qualified content providers in the world, the whole idea of splintering the learning process into a million pieces is obviously one that only a manager (rather than an educator) could love.

Of course, I don’t want to blame this whole thing on MOOCs (as tempting as that might have been a couple of years ago). MOOCs, like any other educational tool, can be used responsibly or irresponsibly. The problem here (as it is with so much of higher education) is the dictatorial, top-down management philosophy that makes their misuse not just possible, but likely. If the practitioners of this management-centered higher ed philosophy can imagine a world without us, perhaps we can begin to imagine a world with a lot fewer of them – a world in which faculty prerogatives over the educational process can be re-established.





“I am unable to comprehend the changes that are taking place in higher education.”

24 06 2014

Wanna know why I’m luckier than you are? I’ve read John Kuhlman’s letters from his students and you almost certainly have not. Radiolab, Starbucks and a ninety-one-year-old retired economics professor are in my latest for Chronicle Vitae.





“What goes up, must come down.”

20 06 2014

My knowledge of the contemporary steel industry is a little rusty, but it’s better than you might think. About a decade ago, my poorly-read dissertation was enough to get me invited on as a consultant to two NEH seminars conducted at the Western Reserve Historical Society in Cleveland, OH. Those gigs included tours of the huge Mittal plant there, where we discussed the difference between how steel is made now and how it was made back in the day.

The big difference is that almost nobody in America makes new steel from ore anymore. The price of scrap steel has been so low for decades now that just about every blast furnace in America* has been taken down and replaced with electric arc furnaces that melt old steel so that it can be recycled into new products. Just last week, I got a chance to tour the arc furnace building here at Evraz in Pueblo for the first time. They have “recipes” which they use to turn various kinds of scrap into first-rate finished products. I don’t think anybody bothers to call plants like this one “minimills” anymore. In reality, they are old plants (the one in Pueblo dates from 1906) that have been retrofitted for new market realities.

All this serves as background for my reading of the primal scream of an interview that Clayton Christensen just gave Business Week in response to the Jill Lepore takedown of disruptive innovation in the New Yorker. Here’s the part about the steel industry:

[Disruptive innovation] is not a theory about survivability. I’d ask [Lepore] to go see an integrated steel company operated by U.S. Steel. Seriously. And come back with data on, does U.S. Steel make rebar anymore? No, they’ve been taken out of rebar. Do the integrated steel companies like U.S. Steel make rail for the railroads? No. Do they make rod and angle iron, Jill? No. Do they make structural steel I-beams and H-beams that you use to make the massive skyscrapers downtown, does U.S. Steel make those beams? Come on, Jill, tell me! No!

So what do they make? Steel sheet at the high end of the market. The fact is that they make steel sheet at the high end of the market, but have been driven out everywhere else. This is a process, not an event.

For all I know, Christensen may be right about what U.S. Steel makes (although there must be a hell of a market for high-end sheet steel if they can remain the industry’s largest producer from that niche). I can, however, tell you this: Evraz makes rail, and apparently they’re the market leader. And they do this from a plant that looks like Hell from the outside but is obviously incredibly high-tech from within. In other words, this particular facility (under different owners) has kept up in its battle against the slash-and-burn “innovators” of the steel industry. And they’re still a union shop!!!

But you don’t even have to innovate in order to keep up. I heard a story in Cleveland that still haunts my dreams. Apparently, there was once a rolling mill there that the owners sold to a Chinese firm. That Chinese firm took the mill, disassembled it piece by piece, reassembled it in China, and started making steel there again. They couldn’t make money from it in America because of labor costs and environmental regulations, but the old technology still worked fine in China. What’s standing at that site now? A Walmart, of course.

Unlike Lepore, who did the research across industries and got it fact-checked, I’m not saying that Christensen is full of it. What I am saying is that the situation in the steel industry is clearly much more complicated than he has suggested. Unfortunately, complicated stories don’t sell books to businessmen. Complicated stories don’t get you expensive speaking gigs. Complicated stories don’t make you the darling of Silicon Valley. Simple ones do.

“What goes up, must come down” is probably too simple to explain what’s happening to Christensen right now, but it’s the best I can do at the moment, so I’m sticking to it.  If anybody out there would like to revise my revisionism, please be my guest.  After all, that’s how scholarship is supposed to work.

* Nobody was sure whether U.S. Steel still made its own steel in Gary, IN anymore. That’s a harder question to answer than you might think. Mittal maintained the capacity to make its own new steel, but when I was there they told me that they almost never used it.





Disruption disrupted.

17 06 2014

I never took a course in the history of technology. My dissertation (and very poorly read first book) was about labor relations in the American steel industry. While overdosing on industry trade journals, I quickly realized that how steelworkers labored depended upon how steel was made, and that the best way to distinguish what I was writing from the many studies that had come before was to get the technological details right.

This proved to be a terrible strategy. While I’m quite sure that I did indeed get the technological details right, the people who read my manuscript never recognized this since they had all read or written books that got them wrong or never covered them at all. The worst comment I ever got (which, of course, I remember to this day) was “Rees knows nothing about the technology of the steel industry.” I begged to differ, but what could I do about it? Nothing.

I wrote Refrigeration Nation because I enjoyed reading old trade journals to get the details right and because I wanted to examine the technology of an industry that nobody else had written about. Surprisingly, when I picked my second book project, that description included the refrigeration industry. Actually, refrigeration is not one technology, but many: ice harvesting equipment, large-scale industrial refrigerating machines, electric household refrigerators, and others. If you read the book (and I certainly hope you do), you’ll see I spill the most ink writing about the transitions between one technology and another.

These transitions can be painfully slow. Ice harvesting didn’t die until around World War I. The ice man still delivered machine-made ice door-to-door in New York City during the 1950s. Even today, you can still buy what is generally known as “artisan ice” for people who really want their drinks to be special. Perhaps this explains why I’ve always been so suspicious of Clayton Christensen’s theory of “disruptive innovation.” Everything I’ve ever studied that you’d expect to disappear in the blink of an eye when in competition with better technology always managed to hold on for decades.

By now, you’ve probably already read Jill Lepore’s absolutely devastating takedown of disruptive innovation in what I presume is this week’s New Yorker. [It appears rather late in my neck of Colorado. Thank goodness this one is outside the paywall!] If you still haven’t, let’s just say that Lepore is unimpressed by the work of her Harvard colleague:

Disruptive innovation as a theory of change is meant to serve both as a chronicle of the past (this has happened) and as a model for the future (it will keep happening). The strength of a prediction made from a model depends on the quality of the historical evidence and on the reliability of the methods used to gather and interpret it. Historical analysis proceeds from certain conditions regarding proof. None of these conditions have been met.

And remember, there’s plenty of excellent evidence for the pace of technological change in countless American industries. You’ve never read an Alfred Chandler takedown because Chandler actually consulted this stuff. Christensen apparently not so much.

Since I don’t have a team of fact checkers at my disposal, I’m just going to concentrate here on the industry Lepore covers that I know best: steel. Here’s Lepore:

In his discussion of the steel industry, in which he argues that established companies were disrupted by the technology of minimilling (melting down scrap metal to make cheaper, lower-quality sheet metal), Christensen writes that U.S. Steel, founded in 1901, lowered the cost of steel production from “nine labor-hours per ton of steel produced in 1980 to just under three hours per ton in 1991,” which he attributes to the company’s “ferociously attacking the size of its workforce, paring it from more than 93,000 in 1980 to fewer than 23,000 in 1991,” in order to point out that even this accomplishment could not stop the coming disruption. Christensen tends to ignore factors that don’t support his theory. Factors having effects on both production and profitability that Christensen does not mention are that, between 1986 and 1987, twenty-two thousand workers at U.S. Steel did not go to work, as part of a labor action, and that U.S. Steel’s workers are unionized and have been for generations, while minimill manufacturers, with their newer workforces, are generally non-union. Christensen’s logic here seems to be that the industry’s labor arrangements can have played no role in U.S. Steel’s struggles—and are not even worth mentioning—because U.S. Steel’s struggles must be a function of its having failed to build minimills. U.S. Steel’s struggles have been and remain grave, but its failure is by no means a matter of historical record. Today, the largest U.S. producer of steel is—U.S. Steel.

Two other factors that Lepore doesn’t mention (which makes me think that Christensen didn’t either) are environmental regulation and foreign competition – the second being the more important of the two to the overall fate of the industry. The success of minimills also required a huge decrease in the price of scrap steel. What these other factors suggest is that any hard and fast rule of technological change will inevitably fall victim to the unpredictability of people. My old advisor used to call this the social system of production, and practically the entire subfield of the history of technology is predicated on this notion rather than Christensen’s brand of technological determinism.

For example, if I remember right, Chandler’s last book (I get the titles mixed up) is about the various quirks in the path of industrialization across international borders. In my work, the most important factor determining the speed at which one refrigerating technology transitions to another is its reception by consumers – and amazingly enough, lots of refrigeration consumers just hate “progress.” Just to namecheck a great book that I happen to be reading right now: in Seeing Underground, Eric Nystrom describes the effect of political factors – especially lawsuits – on the quality of mine maps. In Butte, Montana, at least, the more lawsuits there were, the more precious metals they eventually found.

Of course, my interest in Christensen comes from his pronouncements about higher education. Lepore does very little with them in her article, but that shouldn’t stop anyone from applying the same logic that I just did here. There is no scientific law of the jungle that fates universities to go entirely online or die off. If people value direct human contact and the educational advantages it brings, they should be willing to pay – or force their governments to pay – for universities to teach in face-to-face settings. Like I wrote in Inside Higher Education a really long time ago now, all this talk about inevitability is just a way to shut down discussion so that the educational traits that we once valued will be abandoned more easily.

The great service that Lepore has performed is to metaphorically take the fight over those values to the source of the attacks against them. Like MacArthur at Inchon, she has landed behind enemy lines and will hopefully force the enemy to pull back and defend ideological territory that they thought they had already conquered. Those of us currently at risk of becoming victims of creative destruction can only hope she succeeds.







