We interrupt my merciless book flogging for a very important guest post. People send me stuff (MOOC-related and otherwise) all the time these days. If it’s good, I always ask if it can be a guest post here at MOLB…and my offer invariably gets rejected. The same thing happens when I try to get some of my best commentators *cough* Mazel *cough* to contribute guest posts.
That changes today. Thomas Castillo got his Ph.D. in American Labor History from the University of Maryland in 2011 and is on the job market. His topic is that crazy Northwestern adjunct teaching study, which means that it couldn’t be more timely:
When I first read in David Figlio, Morton Schapiro, and Kevin Soter’s “working paper,” “Are Tenure Track Professors Better Teachers?,” that non-tenure-track instructors were found to be better teachers than tenured ones, my training in immigration, labor, and African American history switched on the warning lights in my head: be wary of broad, sweeping categories used to explain complex social phenomena.(1) My most immediate reaction was that this ran counter to the idea of the MOOC turn.
I have to admit my head has been spinning with the images racing by in the education world. Alongside the rise of a robust new roster of celebrity professors (what some have dubbed super professors), this new study suggests that non-tenure-track professors may actually be the missionaries in the trenches, since they best help the weakest students. The study’s limitations and narrowness are reflected in the fact that it looks at only one elite institution (Northwestern University) and relies solely on questionable quantitative data to comment on teacher effectiveness, measures that are not especially significant even within the study’s own parameters.
The superficial reaction to this “working paper” (an article that has not been peer reviewed) has come on three fronts. At least one tenured faculty member I had contact with thought this study was quite revealing. Up to this point, he had tended to hold a low opinion of non-tenured faculty. In fairness to him, he has also tended to hold a low opinion of tenured faculty whom he has deemed less than dedicated to their teaching responsibilities. However, I believe his gut feeling unfortunately tends to reverberate in academia. Many tenured faculty either scoff at non-tenured instructors or, equally bad, remain oblivious to the insecurity and instability that these non-tenured professors (contingent labor) have to live with: lack of health insurance, reduced pay, large teaching loads, little support from their home institutions, and general social and intellectual marginalization.(2) The best advice progressive tenured faculty usually offer is condescending: they give a weak nod to the right of contingent labor to organize, wondering all along why non-tenured professors have not done so yet. In the process, they reveal their cluelessness about the obstacles preventing contingent faculty unionization.
Another tenured faculty friend of mine was not surprised, but for the opposite reasons. She felt that all the other responsibilities on her plate (publishing demands, committee work, university service, etc.) prevent her from being as good a teacher as she could be. She is very dedicated to teaching but has been deflated by the reality of a large teaching load (4-4, mostly first-year history courses) and of working with students who need more academic support and guidance. As you can imagine, she does not teach at Northwestern University. She is committed to great teaching but is frustrated by the combined demands of academia and the institutional reality of a large teaching load, a situation simply not confronted by tenured faculty at Northwestern, and likely not by its non-tenured faculty either. In addition, she has always held a high opinion of non-tenured faculty because of her own experience learning from them and because of what she has witnessed: many of her friends are contingent labor and are very good at what they do. Her situation highlights that, increasingly, the only differences between tenured and non-tenured faculty at many institutions across the country lie in job security, benefits, and expectations for publication and service.
These two reactions come from instructors who care about teaching and always look to improve its effectiveness. The third reaction is to take this study as further proof that tenure as an institution is problematic and needs to end or be reformed, or that the condition of contingent work is just fine and is actually a good thing for students, especially the weakest ones. While the articles reporting on this “working paper” have generally been “neutral,” they nonetheless report its general findings, which in effect fuel a wider political controversy in academia and offer only an employer’s, organizational perspective on the sensitive student-faculty-management relationship.
The study’s findings, and its coverage in the media, support and buttress a neoliberal argument against tenure or, framed more positively, for flexible work regimes.(3) As the authors of the study write, “the growing practice of hiring a combination of research intensive tenure track faculty members and teaching-intensive lecturers may be an efficient and educationally positive solution to a research university’s multi-tasking problem” (p. 16, my emphasis). It is not difficult to see how such a conclusion will support attacks on tenure, justify an increase in contingent faculty, and further splinter the professorial ranks into at least two castes.(4)
While likely conceived with the best intentions, executed with great integrity, and completed with high hopes, the paper rests on assumptions that raise too many evidentiary problems. The authors show little concern for complex social phenomena, namely the changing contexts confronted by individual instructors and students. The absence of qualitative data hides deep weaknesses in the study’s methodology. Their paper is a classic case of the old saying that there are lies, damned lies, and statistics.
For starters, the authors do not use any qualitative evidence to evaluate teacher effectiveness. They rely on meta-data (courses registered for, registrar transcript records of all first-semester freshmen between 2001 and 2008, information on faculty status, GPA, SAT scores, and a vague and unclear measure used by admissions to rate accepted students, “a five point academic indicator scale” that is never described) to make their analysis of teacher effectiveness. Three huge assumptions they make that are qualitative in nature concern student preparedness (based on the academic indicators and SAT scores), non-tenured teachers’ inspirational power to motivate students to take a second class in a subject (a 9.3% greater chance in subjects outside a student’s stated intended major as listed on the admission application)(5), and a single non-tenured teacher’s power to shape a student’s academic performance in a subsequent course in the subject (a little over one-tenth of a grade point better in the next course).
One may begin to see the problem unfolding here. It may seem like a great idea to organize these data into a useful study. Certainly, access to such meta-data could inspire a group of researchers to look for neat linear causality, particularly if motivated to find “a solution for a research university’s multitasking problem.” The authoritative aura of the meta-data was clearly too much temptation to resist tackling a topical and current problem, even though the data are useless when treated in a vacuum.
Let me state this as clearly and directly as possible: we learn nothing about varied student experience beyond generic aggregate meta-data (GPA), and these data do not account for specific semester context or other qualitative differences among students and instructors. Readers will likely interpret what they want from the study; that still will not correct the flawed logic. The model is monolithic and traps students and teachers in rigid theoretical boxes. The authors bulldoze over relevant data for no apparent reason except what seems to be statistical convenience and/or a lack of vital qualitative information.
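The danger of leaning on aggregate grade data alone can be sketched with a toy example. The numbers below are purely hypothetical, invented for illustration and not drawn from the study: when semester context differs between the groups being compared, pooled averages can point in the opposite direction from every within-context comparison (the classic Simpson’s paradox).

```python
# Hypothetical illustration (invented numbers, not from the study) of how
# aggregate grade data can mislead. Within EACH semester context,
# instructor type A's students average 0.1 points higher than type B's,
# yet the pooled averages reverse the ranking.
data = {
    # context: (avg_grade_A, n_A, avg_grade_B, n_B)
    "light course load": (3.6, 20, 3.5, 80),
    "heavy course load": (3.0, 80, 2.9, 20),
}

def pooled(avg_n_pairs):
    """Enrollment-weighted average across contexts."""
    total = sum(avg * n for avg, n in avg_n_pairs)
    count = sum(n for _, n in avg_n_pairs)
    return total / count

a_pooled = pooled([(a, n) for a, n, _, _ in data.values()])
b_pooled = pooled([(b, m) for _, _, b, m in data.values()])

print(f"pooled A: {a_pooled:.2f}, pooled B: {b_pooled:.2f}")
# prints "pooled A: 3.12, pooled B: 3.38"
# A wins in every context, yet B looks better in the aggregate, simply
# because B's students cluster in the easier context.
```

The point is not that this pattern occurred at Northwestern; it is that nothing in a pooled GPA comparison rules it out without the semester-level context the authors ignore.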
That apparently weaker students can get stronger over time is not a surprising finding, even if they happened to take their first class with non-tenured faculty. Indeed, as educators we should expect it. However, a student’s education is part of a collective effort. A student’s intellectual and emotional growth and development are a function of the entire college experience, not merely of any one teacher. If you accept this premise, then the entire study by Figlio, Schapiro, and Soter crumbles. It will be nearly impossible to identify any cause-and-effect relationship without a deeper and richer social study.
What is surprising is that the authors have the audacity to imagine such a passionless form of inspiration, to believe that they can isolate a student’s growth and development in so linear and isolated a fashion. The sheer volume of the data apparently infused the authors with lots of chutzpah. It really is irrelevant whether a student took a course with a tenure-track or non-tenure-track instructor in the first semester, that is, if one is not weighed down by preconceived notions that an instructor’s ability and skill are somehow related to arbitrary categories of job security and status.
It is a misfortune that Figlio, Schapiro, and Soter found it worthwhile to craft a study looking for differences between these supposedly different populations. Each instructor, I am assuming, earned a Ph.D., which means that all instructors have, in theory, been trained equally in their respective fields. Each student is unique, and each semester’s context likely varied for each instructor and student. The study’s very premise should be offensive to each and every instructor, and to each and every student as well.
The authors assume much about the significance of grades and the next-class experience. Students are treated as monolithic on this front. We have no sense of what may be causing a reduction or improvement in grades, or how that may vary by subject, time, course-load balance, and personal subjective factors disconnected from instructor input. We are expected to accept their assumption that teacher impact can be isolated to either preparation or inspiration. That is a very large assumption, and I for one simply do not hold this belief. Figlio, Schapiro, and Soter need us to have faith in this assumption for their study to have any credibility.
We are to assume that grades are a direct reflection of learning. When discussed in terms of outcomes and performance, learning (education) seemingly becomes quantifiable. Teaching in this context is not understood as a craft or an art; learning can be accounted for by test results or GPA. As far as that goes, fine. But the data exclude what is less easily quantified or simply was not accessed: teacher evaluations, performance in related subject fields, student growth and development, and performance (even on the authors’ own terms) before or beyond that next-class experience.
That final point hits on a major flaw in their data: the analysis isolates student performance/growth to the next class in the subject without following students in other contexts or even attempting to explain the semester context of that next-class experience. The data are therefore too narrow to support sound conclusions on a subject as large as professorial teaching effectiveness. What lessons, skills, and knowledge were actually learned, and which of these are affecting student performance?
The model used in the study to evaluate student performance controls for two factors: student ability, as defined by SAT scores and other unclear indicators, and instructor standing (tenured vs. non-tenured). What is not considered are differences among students that likely leave no record: student motivation; student identity development and the degree to which it affects performance in class (how curriculum and/or major interest connects to a student’s individual identity, sexual orientation, intellectual growth, etc.); and external factors (family, socioeconomic, and other personal issues). One may add the issue of student attendance, which has been shown to have a major influence on student learning outcomes but is not usually measured. In fact, many universities prohibit grading attendance, so instructors often rely on class participation, something measured poorly if at all in large classrooms but evaluated more effectively in smaller venues.
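To make concrete why these unrecorded differences matter, here is a minimal simulation, an illustrative sketch with invented parameters and no claim about the actual Northwestern data: if an unobserved trait such as motivation influences both grades and which kind of instructor a student ends up with, then a comparison that ignores it will report an “instructor effect” even when, by construction, none exists.

```python
import random

random.seed(0)

# Hypothetical simulation (invented parameters, not the authors' data or
# model). Motivation is unobserved; it nudges students toward one
# instructor type AND raises grades. Instructor type itself has ZERO
# causal effect on grades, yet a naive group comparison finds one.
n = 10_000
students = []
for _ in range(n):
    motivation = random.gauss(0, 1)  # unobserved by the model
    # more motivated students are somewhat likelier to land in sections
    # taught by non-tenured instructors (any sorting mechanism would do)
    p = 0.5 + 0.2 * max(min(motivation, 1.0), -1.0)
    non_tenured = 1 if random.random() < p else 0
    # true grade model: only motivation (plus noise) matters
    grade = 3.0 + 0.3 * motivation + random.gauss(0, 0.3)
    students.append((non_tenured, grade))

mean_nt = sum(g for t, g in students if t == 1) / sum(t for t, _ in students)
mean_t = sum(g for t, g in students if t == 0) / sum(1 - t for t, _ in students)
print(f"apparent 'instructor effect': {mean_nt - mean_t:+.3f} grade points")
# The gap is clearly nonzero even though instructor type has no causal
# effect at all: the omitted variable (motivation) manufactured it.
```

Controlling for SAT scores and admissions indicators narrows, but does not close, this gap unless those measures fully capture the omitted trait, which is precisely what the study cannot show.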
The authors do not gauge the difficulty of a given semester by course load or by the qualities of individual courses, nor do they weigh a student’s course load semester by semester. They fail to follow variation over time: when does the next-class experience occur? Do students improve, decline, or stay the same before and after the next-class experience, either in the specific subject or overall, in subsequent semesters? As any instructor or academic advisor will tell you, students’ academic performances (and major interests) change, and change frequently, depending on context.
It is clear to this reader that the authors were simply looking for ways to justify an existing two-tier professoriate, and perhaps even to expand its scale. They have little concern for the working conditions or professional development of professors. I would extend this lack of connection and empathy to the overall experience of students. They are driven by Northwestern’s self-interest in maintaining the prestige of the university, regardless of the costs or real effects of the labor regime in place.
It would have been more scholarly and intellectually honest had the authors produced a study with robust qualitative research, richer in detail and more subtle and sensitive in its analysis. That would have led to far different conclusions and recommendations. Instead, the obfuscating nature of the statistics highlights several poor choices made by the authors.
Given that the study addressed what amounts to an efficiency question (the “multitasking problem”), it started with a blinding bias, articulated in the broad categories of tenure-track and non-tenure-track professors. The authors failed to note that the non-tenured professors at Northwestern (over 80%) are full time, with benefits equal to those of tenure-track faculty, as the education blogger and psychologist Cedar Riener was recently told by David Figlio.(6) Riener’s point that job security and benefits are ethical questions separate from whether or how much students are learning is a crucial one. I do not take Riener’s response as agreement with the study’s findings but rather as an intervention to prevent the confusion of two separate questions: workers’ rights and student learning. He does not analyze whether the study proves anything about learning, and I think for good reason.
If we cut through the motivation for greater university efficiencies, what we get is an inadequate study that offers no real insights. Indeed, it may well be impossible to design a study around such broad categories as tenured and non-tenured, weak and strong students, if we accept the idea that student learning is a collective experience. Studies designed at the micro level, however, may offer useful information about effective pedagogies, proper working conditions for instructors, good learning environments, and other relevant insights into student education and the labor questions shaping higher education. Unfortunately, a perspective that starts at the macro level, emphasizes managerial prerogative and institutional “multitasking problems,” and lacks vital qualitative data will not teach us anything about student learning or the condition of being a contingent laborer.
(1) David J. Figlio, Morton O. Schapiro, and Kevin Soter, “Are Tenure Track Professors Better Teachers?” NBER Working Paper Series, National Bureau of Economic Research, Sept 2013. Figlio is the Orrington Lunt Professor of Education and Social Policy and of Economics at Northwestern University’s Institute for Policy Research; Morton O. Schapiro is President of Northwestern University and an economist who has written on higher education; and Soter is said to be a consultant (see New York Times article cited below).
(2) James Monks, “The General Earnings of Contingent Faculty in Higher Education,” Journal of Labor Research, vol. 28, no. 3 (2007): 487-501; Monks, “Who are the Part-Time Faculty?” AAUP Report, July-August 2009; Gabriel Arana, “Higher Education Takes a Hit,” The Nation, April 13, 2009; Pablo Eisenberg, “The ‘Untouchables’ of American Higher Education,” Huffington Post, June 29, 2010; Claire Goldstene, “The Politics of Contingent Academic Labor,” Thought and Action, Fall 2012; Goldstene, “The Emergent Academic Proletariat and Its Shortchanged Students,” Dissent, August 14, 2013; Coalition of the Academic Workforce, “A Portrait of Part-Time Faculty Members,” June 2012; Kay Steiger, “The Pink Collar Workforce of Academia,” The Nation, July 11, 2013.
(3) Scott Jaschik, “Adjunct Advantage,” Inside Higher Ed, Sept 9, 2013; Tamar Lewin, “Study Sees Benefits in Courses with Non-Tenured Faculty,” New York Times, Sept 9, 2013; Khadeeja Safdar, “Students Learn Better from Professors Outside Tenure System,” Wall Street Journal, Sept 11, 2013.
(5) This statistic is at best perplexing. The authors state that the percentage is actually 7.3% for courses within a student’s stated intended major; that is, taking the first class with a non-tenured faculty member increases the likelihood of taking another class in the subject of the stated intended major by 7.3% (see p. 9). The 9.3% figure represents courses outside the student’s stated intended major. So how are we to interpret the disparity? Students are less likely to take a class in their intended major when they take their first class with a non-tenured professor? This seems to suggest that students are being pushed out of their intended majors for some unknown reason. Following the authors’ logic, one could argue that non-tenured instructors are negatively affecting students’ dreams and aspirations (their stated intended majors) by making them less interesting and thus pushing students away. This, of course, would be an absurd finding.
(6) Cedar Riener, “Student Learning and Labor Policies, follow up,” Cedar’s Digest blog, Sept 25, 2013; Riener, “Student Learning Doesn’t Depend on How Much Teachers are Paid,” The Atlantic, Sept 24, 2013.