My Evil Plan to Save Course Evaluations (#lilly10)

More from the 30th annual Lilly Conference on College Teaching at Miami University in Ohio back in November. I’m pretty sure this will be my last post on the conference!

Student Evaluation of Teaching: Views of Students and Faculty

Ted Wagenaar and Sara Butler, Miami University

The presenters are part of a group studying the use and perceptions of course evaluations at Miami University. The group’s initial goal was to develop a proposal for a new evaluation instrument that would be structured more like an analytic rubric. These are rubrics for which levels of quality within various categories are explicitly stated. A course evaluation instrument structured in this way would replace the ambiguous 1-to-5 Likert scale on a question like “Rate the effectiveness of your instructor in communicating with the class” with something more concrete, like this:

  1. Basic concepts were regularly explained in ways that were very hard to follow.
  2. The instructor explained some basic concepts clearly, but many explanations were hard to follow.
  3. The instructor explained basic concepts clearly, but was difficult to follow when explaining most advanced concepts.
  4. The instructor clearly explained basic concepts and some advanced concepts, but explanations of other advanced concepts were hard to follow.
  5. The instructor clearly explained basic concepts and almost all advanced concepts.

That’s a set of descriptors I just made up off the top of my head, but you can see that with such descriptors, it’s much more likely that the teacher and all the students would interpret a 2 out of 5 the same way. Without descriptors (that is, in the case of all course evaluations I’ve ever seen), it’s quite possible for various students and the instructor to interpret a 2 out of 5 differently.

The Miami University group conducted surveys and focus groups with students and faculty members in order to explore their current perceptions of course evaluations and their roles in the university. They quickly found that opinions were all over the map and that it would be very, very difficult to create a rubric-based tool that would be embraced by all. Take the above question as an example. I interpreted “communicating with the class” as “explaining concepts to the class,” but one could read it as “communicating with the class about expectations and class logistics” or even “understanding and answering student questions.” That’s why this particular rating question is an ambiguous one, but it’s also why I can see the Miami University process producing very long evaluation forms!

A summary of the quantitative data generated by the presenters is available. They shared excerpts from the qualitative data during the session. All these data spurred a very rich discussion among the session attendees about course evaluations, as you might imagine. My note-taking and tweeting couldn’t keep up with this fast-flowing conversation, so I can’t summarize it here. (Aside: I found it interesting that almost everyone still at the conference who was active on Twitter during the conference was at this session!)

However, I will point out one of the links shared: Rachael Barlow’s study on the timing of evaluations posted to RateMyProfessors.com. Rachael notes that students post comments on RMP whenever they like, not just during the week or so near the end of the semester when official course evaluations are usually made available online. Knowing when students elect to post to RMP might shed some light on the best times to make official course evaluations available. Rachael found that for fall courses, most students post comments to RMP in mid-to-late October, with some posting in January instead. While I can’t see faculty getting on board with course evaluations in October, this might help make the case for having students complete them in January instead of the usual December timing, helping increase response rates for online evaluations.

All the talk about rubric-based evaluation tools gave me an idea, one that would probably get me labeled as a troublemaker by some. If students and faculty don’t have a shared understanding of what a 2 or a 4 means on a course evaluation rating question, why not write your own rubric for the rating questions on the evaluation instrument used at your school and share it with your students?

You wouldn’t have to change the standard evaluation form your school uses (which would likely take a serious amount of organizational change); you’d just have to supplement it with what is essentially an explanation of how you define the scores 1 through 5 for each of the questions on the form. You’d have a better sense of how to interpret your scores. It might be harder to compare your scores with your colleagues’ scores or school averages, but how useful are those comparisons anyway without shared understandings of what the various ratings mean?

Here’s where the troublemaking comes in: You could put your rubric in your tenure file to help your tenure committee interpret your student ratings. And you could share your rubric with others in your department or school and encourage them to use it in a similar manner. If they all used your rubric, you’d lay the groundwork for making changes to the official form to make it more meaningful. If they all used different rubrics, well, that might motivate some changes, too, although you might get in trouble for instigating some confusion!

What do you think about my idea for improving course evaluations? Sensible? Crazy talk? Something in between?

Image: “Planning Session,” WorldIslandInfo, Flickr (CC).

Post title inspired by Five Iron Frenzy’s song “My Evil Plan to Save the World.”
