Evaluating Teaching with Clickers

Teaching with clickers provides instructors with a wealth of information they can use to learn about the effectiveness of their own teaching.  Finding out that half your students don’t understand a topic (as evidenced by wrong answers to a clicker question) minutes after you’ve explained it can be disappointing, but it’s better to know your students are confused than to assume they’re following along.  More direct clicker questions (like, “How well are you following my lecture right now? Very, somewhat, a little, or not at all?”) can also provide formative feedback on one’s teaching.

But what about using clickers for more summative evaluation of one’s teaching?  Might clickers be used in place of the handwritten or online end-of-semester course evaluations?  I haven’t heard of many schools doing so, but I did speak with someone involved in such an effort back at EDUCAUSE in the fall of 2008:

I spoke with Danny Sohier of Université Laval in Québec after the session.  His school is using clickers to conduct end-of-semester course evaluations during class.  They found that online course evaluations resulted in low response rates, a problem I’ve heard about from many institutions.  They now use clickers to collect student responses to multiple-choice evaluation questions during class in some courses, inviting students to respond to open-ended questions online outside of class.  Danny indicated that this arrangement is working pretty well.

This seems sensible to me.  The advantage of handwritten course evaluations completed during class is that response rates are fairly high, since most of the students enrolled in a course usually show up on the day evaluations are completed.  The disadvantage is that handwritten evaluations take more work to analyze.  Using clickers during class to ask these kinds of questions keeps response rates high and yields data that are easy to use.

The advantage of online course evaluations is that students can take their time and compose thoughtful and lengthy replies to open-ended questions about one’s course or teaching.  The disadvantage is that relatively few students do so!  The system described above allows motivated students to submit thoughtful responses to open-ended questions after class, while hearing from all (or almost all) the students in a course on some useful multiple-choice questions during class.

This topic has been on my mind since Nira Hativa of Tel Aviv University posted an inquiry to the POD Network listserv about using clickers for formal teaching evaluation.  Kevin Owens of Turning Technologies outlined one way to do so:

  1. Evaluation questions are entered ahead of time into a PowerPoint presentation (via our TurningPoint software which is free) and saved onto the memory stick or network drive.
  2. Instructor leaves the room.
  3. Selected facilitator (staff member, student worker, etc.) loads the interactive PowerPoint presentation and distributes clickers to students.
  4. Selected facilitator administers the questions, giving 5 to 10 seconds on each question to allow students time to submit responses.
  5. When finished, facilitator saves data onto memory stick or network drive for future reporting.
  6. Saved data can produce up to 30 automated reports within our TurningPoint software or can produce raw data available for export into existing reporting tools on your campus.
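The last step mentions exporting raw data into a campus's existing reporting tools.  As a rough illustration, here is a minimal sketch of how such an export might be summarized once it's out of the clicker software, assuming a hypothetical CSV layout with `keypad_id`, `question`, and `answer` columns (the actual TurningPoint export format may differ):

```python
import csv
import io
from collections import Counter, defaultdict

def summarize_responses(csv_text, enrolled):
    """Tally multiple-choice answers per question and compute the
    overall response rate from a raw export (hypothetical layout)."""
    tallies = defaultdict(Counter)
    responders = set()
    for row in csv.DictReader(io.StringIO(csv_text)):
        responders.add(row["keypad_id"])          # one clicker = one student
        tallies[row["question"]][row["answer"]] += 1
    rate = len(responders) / enrolled
    return dict(tallies), rate

# Hypothetical raw export: one row per keypad response.
sample = """keypad_id,question,answer
K01,Q1,A
K02,Q1,B
K03,Q1,A
K01,Q2,C
K02,Q2,C
"""
tallies, rate = summarize_responses(sample, enrolled=4)
```

With 3 of 4 enrolled students responding, this yields a 75% response rate and per-question answer counts, the kind of easy-to-use data mentioned above.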

Mark Scarbecz of the University of Tennessee College of Dentistry pointed to a poster he presented about UT Dentistry’s experiences using clickers for evaluating teaching.  His conclusions?

Response rate and acceptance of the ARS for course evaluation were greater than for a web-based system. The ARS was effective and efficient for data collection. Random selection of keypads provided anonymity. ARS software had multiple formats for data reporting. Limitations of the ARS are the following: 1) a small question set reduces the length of evaluation sessions and student boredom, but also information collection; 2) student conversations during sessions may bias responses; 3) the ARS provided no mechanism for open-ended feedback; and 4) development/presentation of question sessions and dissemination of data are time-consuming and labor-intensive.

Nira Hativa replied to the listserv to note that her situation has a particular challenge.  The courses she’s evaluating rotate instructors every two or three class sessions, so waiting until the end of the semester to collect feedback on those instructors isn’t practical.  She discussed this challenge with Mike Theall of Youngstown State University, and several obstacles emerged: she doesn’t have the staff power to send a facilitator into these classes every two or three class sessions to conduct this evaluation, the students might not provide honest feedback if the instructor is the one administering the evaluations, and some of her instructors might object to one of the students in the class proctoring the evaluations (as has often been done in the past with paper-based evaluations).

If the student proctor option is off the table, I’m not sure Nira’s problem has a solution.  However, I throw the question to you now: Any ideas for helping Nira?  And do you have any experience with or ideas about using clickers for course evaluations?
