Transforming course evaluation surveys from ‘good’ to ‘great’

Joseph Martin, Columnist

On March 29, UT published the new course evaluation survey results from the Fall 2022 semester — the first wave since the provost announced a new system last April. Despite improvements such as increased accessibility and higher survey response rates, students registering for classes can see only the broad results of previous course evaluations.

For the sake of transparency and the benefit of its students, UT should implement more qualitative questions in the new course evaluation system and share those results with students.

While UT’s course evaluation system exists in part to inform registering students about courses using previous student evaluations, this system lacks holistic descriptors of course quality. Students may find themselves consulting third-party websites and basing their class selection on less informed, outlying views. 

One prompt on the current course evaluation survey (CES) reads, “The instructional techniques kept me engaged in learning,” and asks students to rate their agreement on a five-option scale ranging from “strongly disagree” to “strongly agree.” While this does give us a glimpse of students’ perceptions, it does little to provide substantive information about the teaching methodology.

Other CES questions are often similarly ambiguous and multifaceted, making it difficult for students to gauge whether classes are best suited for them. Economics junior Matheo Hayek explained he would value more specific information on the CES report regarding professors’ accessibility and teaching styles. 

“I (prefer) professors that are easier to reach out to, that don’t mind actually talking with students,” Hayek said. “On top of that, I do like when professors kind of build on the material in class instead of just going through slides, or (when they) help (students’ learning) in different ways.”

Expanding the scope of the CES, though, is no simple feat. 

“Bias can enter into any kind of evaluation,” said Julie Schell, UT’s assistant vice provost of academic technology and the director of the Office of Academic Technology. “The comments that are going to be the most helpful are the ones where someone can actually do something with it, but comments on someone’s personal appearance or whether they’re cool or not — (we) try to limit that. For the purpose of improving teaching, having both (qualitative and quantitative feedback) is really helpful.”

Refining the CES requires a two-fold approach. 

Firstly, students must be better incentivized to complete course surveys thoughtfully and accurately. This could be accomplished through a test-like structure in which students log in and complete the survey synchronously. Each question would unlock only after a requisite time period had passed and would lock once answered, preventing students from clicking through blindly. Secondly, qualitative questions must be worded specifically enough to garner actionable feedback rather than unproductive responses.

If the CES feedback is comprehensive enough to inform staff about potential curriculum and faculty improvements, then it should serve as a guide for students to choose future classes. 

While surveys should still be vetted for inappropriate or incoherent responses, preserving students’ voices by publishing their CES responses would create a more tangible impact. As with any major decision, academic or financial, students should know exactly what they’re committing themselves to. 

Martin is an advertising and radio-television-film junior from Rockwall, TX.