“You tell anyone and we’ll kill you.”

25 03 2013

Many thanks to the Edububble guy for finding the above Saturday Night Live clip. It’s been at the back of my mind ever since I started blogging about online education, but I couldn’t remember enough of the details to find it myself. Hit play and you’ll see that it’s about Winston University, “located just 35 miles west of Boulder, Colorado” (which would be on top of a large mountain). Winston University takes parents’ tuition money, splits it fifty-fifty with students, and only requires them to come back on Visiting Day, April 12th. The school’s motto: “You tell anyone and we’ll kill you.”

This is the development that inspired that clip’s appearance on Edububble:

It’s official: Colleges can now award federal student aid based on measured “competencies,” not just credit hours.

In a letter sent to colleges on Tuesday, the U.S. Education Department told them they may apply to provide federal student aid to students enrolled in “competency-based” programs and spelled out a process for doing so.

One way to measure these “competencies”? MOOCs, of course. While I’m mostly in agreement with the Edububble critique here, I do object to the post’s title, “Ka-Ching! Professors Can Skip Class Too Now!” As if any of this was the faculty’s idea. With shared governance in the state it’s in these days, there’s no way that Winston University would ever share the take with whatever faculty might show up on Visiting Day. Nowadays, they’d just offer all their fake classes online and get rid of Visiting Day entirely.

Of course, Winston University is just the logical extension of the corporatization of higher education. While money for nothing is every university president/aspiring CEO’s dream, faculty serve as a check on these kinds of abuses. As Katherine D. Harris writes about MOOCs at San Jose State:

Do we want students to simply get through our curriculum? Or do we want them to learn?

The more contact you have with your professor, the better that professor will be able to do their job, which is to make sure that everyone in that classroom is actually learning.

But what about people who don’t care whether they’re learning or not? Aren’t plenty of people taking MOOCs because they just want access to information? Yes they are, but are their narrow interests worth letting greedy administrators destroy higher education entirely? Stephen Downes (no friend of this blog) seems perfectly content to let them do this in the short term:

MOOC success, in other words, is not individual success. We each have our own motivations for participating in a MOOC, and our own rewards, which may be more or less satisfied. But MOOC success emerges as a consequence of individual experiences. It is not a combination or a sum of those experiences – taking a poll won’t tell us about them – but rather a result of how those experiences combined or meshed together.

This may not reflect what institutional funders want to hear. But my thinking and hope is that over the long term MOOCs will be self-sustaining, able to draw participants who can see the value of a MOOC for what it is, without needing to support narrow and specific commercial or personal learning objectives.

This is what we call in American football “moving the goalposts.” If the idea of universal college education doesn’t work for everyone, then claim that was never the original goal. Or maybe that wasn’t Downes’ original goal, but there are still plenty of universities around the world champing at the bit to treat “competencies” gained through MOOCs as the exact same thing as having attended college, because they are desperate to shed faculty labor costs (even though the faculty they want to shed are what make a college education valuable in the first place). Turning a blind eye to this tragedy, the MOOC Suicide Squad prefers living in a dreamworld in which students can all teach themselves everything they need to know (and grade each other too!).

As a parent with a daughter in college right now, I can assure you that the vast majority of us will not pay thousands upon thousands of dollars to have our children’s “competencies” tested, nor will employers hire “graduates” whose sole means of demonstrating those supposed skills is a standardized test or a MOOC completion certificate, even if it’s from Harvard lite. At least the people who ran Winston University were smart enough to keep their scam secret. This scam is going to be run in front of everyone, students and parents alike, because MOOC enthusiasts have no sense of shame.





“Has he lost his mind?”

6 08 2012

So I signed up for a MOOC. Seriously. A History of the World since 1300, taught by Jeremy Adelman from a certain university located in my hometown of Princeton, New Jersey.

Why would I of all people do such a thing? Well, I’ve had something of a complex about my overspecialization in American history since my first teaching job at Whitman College. Wisconsin had Americanists coming out of its ears, but we were in the minority at Whitman, so the old Europeanists teased me for having such a limited knowledge base. I’ve rectified that somewhat through independent reading, but I could definitely stand to learn more specific factual knowledge from outside my country of specialty.

Then I watched this TED talk by Coursera’s Daphne Koller and got a little excited. I had never seen so detailed an explanation of the mechanics of MOOCs, and it seems as if they’ve gone to great lengths to help students learn the kind of factual knowledge that I’m missing when it comes to world history.

Have I lost my mind? Nope. Am I pulling a Whittaker Chambers or a David Horowitz on the subject of MOOCs? Nope. As anyone who’s ever watched a TED video knows, there are parts of every such speech that make you want to take a hammer to your computer screen (and I’ll get to the one that did it for me in this speech in just a second). However, as I’m on sabbatical this coming semester, learning world history seems like a good use for some of my extra time.* In fact, there’s a place on my annual performance review for extra education which I’ve never had occasion to mark before. I’m absolutely going to put this down.

So what’s the problem? Well, for starters, the course has only one text, and even that’s only recommended. Is there a history class anywhere in America (let alone at Princeton) that has no required reading? Seriously, I have a question for all the education geniuses out there who want me to flip my classroom: when are students going to do the reading I assign them? After all, history is a literary art, not a trivia game.

Now here’s the part of that Daphne Koller video that came close to inspiring me to violence (my transcription):

“Well, of course, we cannot yet grade the range of work one needs for all courses. Specifically, what’s lacking is the kind of critical thinking work that is so essential in such disciplines as the humanities, social sciences, business and others. So we tried to convince, for example, some of our humanities faculty that multiple choice was not such a bad strategy. That didn’t go over really well.

[Audience chuckles]

“So we had to come up with a different solution. And the solution we ended up using is peer grading. It turns out that previous studies show, like this one by Sadler and Good, that peer grading is a surprisingly effective strategy for providing reproducible grades. It was tried only in small classes, but there it showed, for example, that these student-assigned grades on the Y-axis are actually very well-correlated with the teacher-assigned grades on the X-axis. What’s even more surprising, self-grades, where students grade their own work critically – so long as you incentivize them properly so that they can’t give themselves a perfect score – are actually even better-correlated with the teacher grades. So this is an effective strategy that can be used for grading at scale and is also a useful learning strategy for the students because they actually learn from the experience.”
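
(A side note for the curious: the “correlation” Koller is talking about is just ordinary Pearson correlation between two sets of scores. Here’s a toy version of that check in Python, with invented grades rather than anything from the Sadler and Good study.)

```python
# Toy check of peer-vs-teacher grade agreement. The numbers are
# invented for illustration; they are not from Sadler and Good
# or from Coursera.
from statistics import correlation  # Python 3.10+

# Hypothetical scores for ten essays, graded on a 0-100 scale.
teacher_grades = [62, 71, 88, 55, 94, 78, 67, 83, 49, 90]
peer_grades = [60, 75, 85, 58, 91, 80, 70, 80, 52, 93]

r = correlation(teacher_grades, peer_grades)
print(f"Pearson r between peer and teacher grades: {r:.2f}")
```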

I’ve covered this precise subject before, but this sounds even worse to me now than it did then. When testing becomes the be-all and end-all of American education at all levels, we act like it’s OK to care only about the math and not about actual learning.

How are students ever going to learn anything about critical thinking in any subject without good, thoughtful comments? The students are incentivized to get done with their peer grading as soon as possible because it’s not their grade. When I grade, my salary incentivizes me to actually explain to my students how to do better next time. As further incentive, when my comments actually help, it makes grading their papers easier in the future. That kind of attention will never scale up. Period.

My only worry is whether anyone will care. I guess I don’t care for the purposes of what I want out of this class, since presumably I know something about critical thinking already.**

* No lazy professor jokes, please. As anyone who’s ever been on sabbatical knows, it’s not a work-free period. It’s a period when you do different kinds of work. I’ve been telling people that I’ll be a professional writer until January. I have a new research project to work on, but of course I’m going to write about taking this course too.

** If I can’t ace this course I’m going to be so ashamed.





What should students know after your history course is over?

19 10 2010

This morning at the Historical Society blog, Randall Stephens asks a really common question: “What do undergraduates know about history?” The reason to ask that should be obvious. If students don’t understand the historical background that your lecture is predicated upon, it will likely go in one ear and out the other. My fear, however, is that even if students do understand the historical background that my lectures are predicated upon, it will still go in one ear and out the other, because too many of them simply do not have the skills they need to master the art of historical thinking.

It’s no coincidence that I used that phrase, since Randall brought up Sam Wineburg. Here is the quote from Wineburg just before the one he uses:

Let me give you a quote: “Surely a grade of 33 out of 100 of the most basic facts of American history is not a grade of which any high school can be proud.” Did this come from the 1987 National Assessment of Educational Progress report by Diane Ravitch and Chester E. Finn? Did it come from the 1976 bicentennial test that Bernard Bailyn did with the New York Times or the one that Allan Nevins did in 1942? No. This is a quote from a study done in Texas high schools by J. Carleton Bell and D.P. McCollum, published in the 1917 Journal of Educational Psychology. It was the first large-scale factual test of American history that we have in American education. Think about who went to high school in Texas in 1915 and 1916; only 10% of the population, the elite, and yet they scored horribly on this test.

In other words, the problem of students lacking specific factual knowledge has been around for a long time. Perhaps the results are more comical these days than they used to be. Nevertheless, we can’t control what students know before they enter our classrooms. We can, however, control what they know after they’re done.

I think there are two ways to address knowledge deficiencies among students. The first would be to cover everything important that you think they’re missing. The problem with that strategy should be obvious. How do you know what they don’t know? Are you going to give them a standardized test at the beginning of class? Let’s suppose you did, and that it was a perfect indicator of student knowledge (an assumption I could spend an entirely different post attacking). How are you going to figure out what they know about the topics NOT covered on the test? Do you really want to spend that much time prioritizing specific factual knowledge?

The other strategy to address knowledge deficiencies would be to ask a different question. My choice would be “What should students know after your history course is over?” More importantly, my answer to that question isn’t a list of facts. It’s a list of skills:

1. How to think like a historian.
2. How to express that kind of critical thinking in a written format.
3. How to read critically.
4. How best to conceive of history in general (rather than memorize specific historical facts).

Ideally, more than a few historical facts will slip in while this teaching of skills is going on. After all, you have to teach your students some facts or else they won’t have any building blocks for their arguments. The difference is that they get to pick the facts. The facts they find most useful to their lives will likely be the ones they deploy in answering my questions, and those are the ones most likely to stick.

Thinking and teaching this way has been a gigantic change for me. As a longtime history geek, I’ve always had a very good memory for all sorts of little details that served me well on history tests. I’ve reached the point in my career where I can lecture off a single-page PowerPoint slide list if I have to (even though I prefer more notes to help me through those more-than-occasional moments when I lose my train of thought). I’ve always prided myself on covering every aspect of American history: social, cultural, political and economic.

Then I ran smack into reality. Most students found what I thought was interesting to be extremely dull. I had to decide between covering everything badly or covering fewer things well. More importantly, all this talk about assessment got me thinking about the outcomes I wanted from my survey classes, and strangely enough there wasn’t a single specific fact on my list.

Imagine your average freshman survey student. How much are they going to remember about the history you covered in your course ten, twenty, maybe fifty years from now? Unless you’re a historian, facts are fleeting (and they always have been). Skills are forever.





I hate standardized tests.

18 12 2009

Mark Kleiman (via Megan McArdle) reminds me that it’s almost time to undo one of the stupidest acts of the Bush Administration:

One of the striking features about NCLB is the primitive evaluation mechanism it employs. It’s pure defect-finding: measuring the percentages of kids of different types who fail to achieve some standard, as measured by standardized tests. Henry Ford would recognize it. W. Edwards Deming would be appalled by it.

Statistical quality assurance depends on sampling, not census inspection; on paying attention to the entire range of outcomes, not just whether a given outcome meets or fails to meet some standard; and on process. And it is continuous and interactive rather than purely retrospective. In Deming’s world, the purpose of quality assurance is to feed back information about processes and their outcomes to operators so the processes can be changed in real time.

One of the reasons Honda and Toyota ate General Motors’s lunch is that the Japanese car companies adopted statistical quality assurance while Detroit was still inspecting every part coming off the assembly line to see whether it was within tolerance. Why are we using those same outdated principles to manage the much more complicated problem of teaching children to read, write, and reckon?

We test every student so that we can pin the failure rates of students on their teachers or their schools. Sample students for high-quality, expensive testing, as we do with the National Assessment of Educational Progress, and blaming teachers becomes impossible. Not only would that destroy whatever Republican support there was for the bill the first time around, it would give the students who take the test even less incentive to do their best, because they would know the results won’t affect the two things (their schools and their individual teachers) that they presumably care about.
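
To be clear, the statistics aren’t the issue here; a modest random sample really does pin down a failure rate. A toy illustration (made-up numbers, not NAEP data):

```python
# Toy illustration of sampling vs. census inspection: estimate a
# district-wide failure rate by testing a random sample of students
# instead of all of them. All numbers are made up.
import math
import random

random.seed(1)

# Hypothetical district: 50,000 students, 22% of whom would fail.
population = [1] * 11_000 + [0] * 39_000  # 1 = fail, 0 = pass

sample = random.sample(population, 1_000)  # test only 1,000 students
p_hat = sum(sample) / len(sample)
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / len(sample))  # ~95% CI

print(f"Estimated failure rate: {p_hat:.1%} +/- {margin:.1%}")
```

The estimate lands within a couple of points of the true 22% while testing only 2% of the students. The objection isn’t to the math; it’s to what sampling takes off the table politically.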

Seeing Kleiman’s faith in management principles of any kind in school systems suggests to me that he doesn’t understand the first thing that any thoughtful classroom teacher will tell you if you ask: Learning is not a commodity. You can’t price it, you can’t sell it, you can’t transfer it and, most of all, you can’t measure it objectively. Period.

I was listening to someone from the Department of Ed in DC last week say that the Department believes it has two functions: shining a light on good educational practices and giving away money. That gave me hope for the future, as the best thing we can do to improve education in this country is to attract higher-quality teachers by paying them more money and then keep the government out of the way.





Why I hate “educrats,” part 3461.

29 06 2009

From Inside Higher Ed:

Online learning has definite advantages over face-to-face instruction when it comes to teaching and learning, according to a new meta-analysis released Friday by the U.S. Department of Education.

The study found that students who took all or part of their instruction online performed better, on average, than those taking the same course through face-to-face instruction. Further, those who took “blended” courses — those that combine elements of online learning and face-to-face instruction — appeared to do best of all. That finding could be significant as many colleges report that blended instruction is among the fastest-growing types of enrollment.

The Education Department examined all kinds of instruction, and found that the number of valid analyses of elementary and secondary education was too small to have much confidence in the results. But the positive results appeared consistent (and statistically significant) for all types of higher education, undergraduate and graduate, across a range of disciplines, the study said.

Horsehockey. How do I know? The study’s design:

A meta-analysis is one that takes all of the existing studies and looks at them for patterns and conclusions that can be drawn from the accumulation of evidence.

On the topic of online learning, there is a steady stream of studies, but many of them focus on limited issues or lack control groups. The Education Department report said that it had identified more than 1,000 empirical studies of online learning that were published from 1996 through July 2008. For its conclusions, however, the Education Department considered only a small number (51) of independent studies that met strict criteria. They had to contrast an online teaching experience to a face-to-face situation, measure student learning outcomes, use a “rigorous research design,” and provide adequate information to calculate the differences.

The key word here is “calculate.” The only way to make that possible is to measure learning by filling in tiny bubbles on multiple-choice tests. This almost has to be true given the age of some of those studies.
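
For anyone who hasn’t seen the machinery, the calculation in question is usually a pooled effect size: each study contributes an estimate weighted by its precision. A toy fixed-effect version, with invented numbers rather than the Education Department’s actual data:

```python
# Toy fixed-effect meta-analysis. Each study contributes an effect
# size (a standardized mean difference; positive favors online
# instruction) weighted by the inverse of its variance, so larger,
# more precise studies count for more. All numbers are invented.
import math

# (effect_size, variance) for a handful of hypothetical studies.
studies = [(0.35, 0.02), (0.10, 0.05), (0.24, 0.01), (-0.05, 0.08)]

weights = [1 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.2f} (standard error {se:.2f})")
```

And every input to that arithmetic has to be a number, which in practice means a test score.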

Multiple-choice tests can’t measure the kinds of learning outcomes that you can only get through direct engagement with the professor. You know, stuff like an actual conversation, where you can respond in real time and make sure everybody is actually paying attention.

What’s worse are the implications of this study for labor. I can hear it now: “Let’s outsource higher education to India! After all, they’re cheaper over there, and the fact that everything has to be done online is actually to our advantage.”

It’s times like this when I fear for our future as a country.





Sam Wineburg v. the Teaching American History program

20 04 2009

As somebody who’s been involved in the Teaching American History Grant program for a long time now, I had heard about Sam Wineburg’s speech before the TAH conference at the OAH Meeting in Seattle last month, but I didn’t realize precisely what he was proposing until I read Rick Shenkman summarize it on HNN this morning.

I certainly agree with this critique of the program:

And how do we generally measure the effect of the TAH programs on teachers? By having them take multiple choice questions found in an AP history exam. Wineburg was incredulous about this. “In other words, we are paying millions of federal dollars per fiscal year to assure that school teachers possess the level of factual knowledge that we expect of bright seventeen year olds.”

It’s actually worse than that. Many programs give the same multiple-choice questions only twice: once at the beginning of the course of study and again at the end. Assuming the questions match the course of study (which may be a big assumption if they’re using AP questions), even a ten-year-old should be able to do better the second time simply by paying attention.

Wineburg is generally right that assessing the success of TAH programs is a huge problem. Anyone involved in the program for any length of time already knows that. However, suggesting that existing assessment mechanisms stink does not necessarily mean that the program itself has failed to teach teachers (or even students) anything.

Looking at Wineburg’s proposed solutions, it strikes me that he wants to throw the baby out with the bathwater.

1. “Set aside 20 percent of TAH fiscal year funds for competitive grants … to independent researchers … to assess and evaluate projects in experimental and quasi-experimental ways.” This is needed because one of the gravest threats to the integrity of the evaluation process is the cozy relationship that often grows up between teachers and evaluators, he said.

There goes 20 percent of the money that might have gone to teaching teachers history. Since huge percentages of grant money already go to evaluators, why not make them do this evaluation with that existing pot?

2. “For every $20 million in awards, [we should] set aside $1 million for new research and the development and testing of new measures to assess historical understanding and knowledge.”

Again, why can’t this be done within the cost structure of the existing program?

3. “We need to stop testing teachers with multiple choice items.”

Agreed, but since school districts will have to come up with new assessment tools anyway, why not make them do so with their existing grant money?

4. While communities love to invite marquee historians to do their summer workshops these are often not the right historians to be involved in TAH. “We need to engage those historians who are working on the scholarship of teaching and learning … those people who are trying to create college classrooms where our students are thinking and working beyond the use of historical facts. These are the historians we must keenly engage in our projects so we can begin to articulate the problems between elementary and secondary and tertiary education.”

The purpose of the TAH program is to improve teacher content knowledge. While some historians certainly do a better job at this than others, what makes Wineburg think that people who think the same way he does will do this job any better than the ones who are doing so now? If he wants money to improve schools of education, he should find someone in Congress who agrees with him and try to start a new grant program. I would certainly support that effort.

5. “I dare anyone in this audience to dispute the following claim: We will not change history teaching by continuing to ignore how new teachers are trained. It’s that simple. We need innovative approaches for combining the strengths of university history departments and schools of education to create the kinds of courses and practice teaching assignments that put new teachers into the classroom already possessing deep knowledge and appropriate skill. We need new ways of thinking about alternative certification for history teachers and ways to deliver teacher training on-line. By ignoring how we socialize new teachers into the profession, we delude ourselves. More than any other issue, this one is the elephant in the TAH living room.”

My problem with how history and social studies teachers are trained is that they spend too much time in education classes and too little time learning the content that they’ll teach. Robert Byrd created the TAH program precisely in order to fix that situation. Sure, we cover how to teach the new material as well as what to teach, but just because program participants have not proven learning effectiveness to Wineburg’s satisfaction does not necessarily mean that the entire focus of the program should be changed.

In our grants, we’ve moved towards new models of assessment involving document-based questions for both the teachers in the program and their students. Other higher-order assessment models exist, like this one, which might be adapted to a TAH context.

Using the word “boondoggle” at this juncture to describe a program that has already done so much non-quantifiable good just doesn’t strike me as particularly helpful.





If you have to do assessment, this seems like a pretty good way to do it.

26 03 2009

My department has been deep in the throes of discussing assessment at regularly scheduled department meetings for some weeks now. As a historian, I used to contend that assessment is evil. As this recent post from RYS puts it so well:

After we developed the learning outcomes, we were told that our grades did not measure whether or not the students had achieved the outcomes. So then we all had to go to special training to learn how to evaluate our students’ outcome-based learning on a grid that lists each student, each learning outcome, and the activity we used to determine whether the student met the outcome. We not-so-jokingly called this The Matrix training.

We were told that we had to do all this because otherwise we would lose our accreditation. We redid every single syllabus in our college in less than a month and then created The Matrix in a couple of weeks. The result is a steaming pile of busy work at the end of each semester to satisfy those who believe that more paperwork somehow equates with quality.

I still think there’s something to this, but if somebody is going to make you assess whether your students are learning anything, you might as well try to do it in the least evil way possible. I think David Scobey of Bates College, writing at Inside Higher Ed (via AHA Today), may have something here:

What, then, would a robust assessment practice look like? It would embody the qualities that typify humanities learning itself. It would be iterative: gathering and evaluating portfolios of material from the whole arc of the student’s career. It would be exploratory and integrative: asking students to include in those portfolios materials in which they are not only learning about the humanities in their course of study, but also using it in their civic, ethical, vocational, and personal development.

It would be autobiographical: requiring students to narrate and thematize that development, to frame their portfolios with their own, small versions of Obama’s memoir. And it would be reflective: calling on them at threshold-moments to plan and take stock, to evaluate their successes and failures, and (equally important) to make explicit what they count as success and failure in their education. This last point is crucial: humanities assessment (like humanities learning) is intrinsically dialogical and open-ended. Indeed the sine qua non of a successful humanities education may be precisely that it equips students to discuss and contest the question, “Has my education been a success?” with their teachers and their peers.

While the whole autobiographical Obama thing seems pretty goofy, I think what he means is simply that students should be able to describe the process by which they reached their conclusions. Who can be against that?

Granted, you may think that only someone teaching at a college as small as Bates could come up with such a scheme, but Scobey mentions online versions of this already up and running at bigger schools like Portland State. The whole discussion reminds me of the last set of articles I read on re-writing the history curriculum to emphasize skills rather than facts. Stick with factual knowledge as a student learning outcome and there’s no way you can avoid those stupid multiple-choice tests.

As Scobey recognizes, this is not the kind of assessment that makes edu-crats happy:

I am mindful that the model I am sketching is bound to give the assessment reformers heartburn. Portfolios framed (like the pages of the Talmud) by autobiographies, reflection statements, and contestatory dialogue; student work assembled in narratives of meaning-making, rather than being measured as evidence of mastery — this is surely not what the Spellings Commission meant when it called on academics to take assessment seriously. For the reformers want an efficient, transparent, portable metric of effective teaching and learning: a tool that can quantify the value-added of a college education, of skills learned and knowledge deployed, in comparative rankings.

So if someone is going to make you do assessment anyway, you might as well do it in a way that suits you and your discipline. At least this way you’ll have a leg to stand on when you can’t avoid it any longer.