When testing new teaching methods on students, proceed with care

Jennifer Ebbeler

Editor’s Note: This column is one in a series by associate classics professor Jennifer Ebbeler on the changing nature of higher education at UT-Austin and other institutions. Look for Prof. Ebbeler’s column in the Opinion section of this paper every other Wednesday.

Last week Sebastian Thrun, founder of the massive open online course provider Udacity and, according to Max Chafkin, the godfather of free online education, announced that he was abandoning the hallowed halls of the university for the more lucrative world of business training. 

Thrun told Fast Company he was delighted to be back in the world of paying customers. The primary reason for what he termed a “pivot” was a disastrous pilot program involving Udacity and San Jose State. The educational experiment was announced in January 2013 — less than a year ago — to much fanfare and a healthy dose of hype. The partnership was supposed to give students from a range of backgrounds, including high school students, access to university-level courses. 

But the spring semester pilot of the program failed to consider that such a varied student population would require substantial on-the-ground support, and the results of the pilot were disappointing. When the summer session pilot failed to produce substantial improvements, San Jose State and Udacity “put the experiment on hold.” 

Now, with Thrun’s abrupt departure to the private sector, the experiment is on permanent hold.

News of Thrun’s decision to leave the academic game was greeted with substantial outrage. Particularly infuriating was Thrun’s tone-deaf comment, also to Fast Company, in which he said that the participants in the pilot project “were students from difficult neighborhoods, without good access to computers and with all kinds of challenges in their lives … [For them] this medium is not a good fit.”

Thrun’s experiment didn’t yield the results he wanted, the press for Udacity was not good, and so Thrun turned his back. He made no attempt to figure out why the experiment failed or what needed to change to provide a genuine educational opportunity for this student group. Creating a quality course that supports student learning requires experimentation, but it is this kind of careless experimentation on students that needs to stop.

In the fall of 2012, I significantly redesigned a course I had taught as a traditional lecture course. In many ways, I turned the class upside down, and in learning how to teach the class anew, I made a lot of missteps.

There were some vocal, unhappy students who resented feeling that they were part of a pedagogical experiment. They had a right to feel that way. But the responses of that group — and especially the complaints of its most vocal members — were crucial to figuring out what needed to change in the course’s structure.

Any instructor who uses Blackboard or Canvas already collects a significant amount of data about student learning habits. Other class tools, such as Echo360, add to this mass of data. In analyzing data about my students’ learning habits, I am not interested in what individual students are doing so much as what the class as a whole is doing: Did they watch the assigned lecture? When did they watch it? 

These questions influence how I teach, and so long as these electronic footprints are used in the aggregate and toward the improvement of a current or future course, I would argue that the use of information about students garnered through teaching tools is fair. At the same time, I find myself wishing that both instructors and students were better informed about what data is being collected, how it can be used, and what responsibilities faculty have to ensure that students are informed and their privacy is protected.

Normally, anyone who experiments on human subjects must have their research plan approved by an Institutional Review Board. The IRB has a responsibility to protect research subjects from abuse and to ensure that all research subjects are properly informed and have given consent. There is an interesting loophole in this process, however. If the research is part of the course design and clearly connected to defined learning outcomes, it does not require IRB approval. For example, if an instructor asks students to fill out a reflective survey about their exam preparation, with the intention (and solid research support) of helping students identify and correct bad study strategies, it does not fall under the purview of an IRB. If I — a classics professor — were to poll my students on dating habits without going through the IRB, however, I’d be in trouble.

Students need to be informed of their rights — including their right to say no to providing data that is clearly not connected to their learning in a course.

At this crucial moment in the history of higher education, it is important that a university support pedagogical experimentation. But it is equally important that faculty who engage in such experiments do so responsibly, thoughtfully, and always with an eye toward using all feedback to improve student learning. The absence of this feedback loop in the San Jose State-Udacity partnership is what makes it irresponsible and potentially unethical. Pedagogical improvement does not happen without some experimentation, but we must always remember that our students are, first and foremost, our students, not experimental subjects.

Ebbeler is an associate classics professor from Claremont, Calif. Follow Ebbeler on Twitter @jenebbeler.