Article: Yourstone, Kraye, and Albaum (2008)

Reference: Yourstone, S. A., Kraye, H. S., & Albaum, G. (2008). Classroom questioning with immediate electronic response: Do clickers improve learning? Decision Sciences Journal of Innovative Education, 6(1), 75-88.

Summary: This article presents the results of a study of four sections of an undergraduate introductory operations management course. Each of two instructors taught two sections, one with clickers and one without. In each of the four sections, quizzes were administered regularly during the last portion of many class sessions. In the clicker sections, the quizzes were administered using clickers, and questions that many students missed were discussed immediately. In the non-clicker sections, the quizzes were administered on paper, graded after class, and returned to the students in subsequent class sessions. Students were then allowed to ask questions about quiz items they had answered incorrectly.

Students in the clicker sections scored about four points higher on their final exams than students in the sections that did not use clickers, a statistically significant difference. This difference held for both instructors, in spite of the fact that one instructor was new to using clickers and the other had used them in two previous semesters. (Both instructors were experienced in teaching the course, however, and had team-taught it in the past.) The authors conclude from their results that student learning in the clicker sections was enhanced by the immediate feedback made possible by the use of clickers.

Comments: These are certainly interesting results, similar in some ways to those of Reay, Li, and Bao (2008). In that study, the use of clickers resulted in a ten-point improvement in final exam scores. However, the use of clickers in the Reay, Li, and Bao study was more extensive and was combined with a particular pedagogical approach. As I noted in my blog post on that study, it’s tough to say what role the clickers played in the improvement of student learning relative to other aspects of the pedagogical approach used.

In this study, the difference in final exam scores between the clicker and non-clicker sections wasn’t as large (four points instead of ten), but it’s a little easier in this case to attribute that difference to the use of clickers. The authors attribute the improved learning outcomes to the fact that the clickers allowed the instructors to give students more timely feedback on their quiz performance. There are, however, a couple of other factors that could be at play here.

One is that in the clicker sections, students found out not only how well they did on the quizzes but also how well their peers did. Because the instructors shared the results of each clicker quiz question with the class, all students were aware of the aggregate class response to each question. It’s possible, for instance, that when students learned that large numbers of their peers had missed a question, they paid more attention to the explanation offered by their instructor. This would result in greater student learning, too.

Another, perhaps even more important, factor is that in the clicker sections, the instructors were able to practice “agile teaching” by discussing with the class the quiz questions that many students missed. (It’s not clear from the article how these questions were discussed, only that there was some kind of explanation or class discussion.) In the non-clicker sections, the instructors apparently did not analyze the quiz results to identify and discuss the most frequently missed questions. By enabling agile teaching, the clickers let the instructors use class time to address common misunderstandings. This more efficient use of class time might have led to greater student learning, too.

It’s unclear which of these three factors (more immediate feedback, question-by-question quiz results shared with the class, and agile teaching) led to the improved final exam scores. In fact, all three likely had an impact. Regardless, each of these factors was enabled by the use of a classroom response system, and none of them was at play in the non-clicker sections. As a result, the four-point improvement in final exam scores can be attributed fairly directly to the use of clickers in this case, making this article a strong piece of evidence in favor of the use of clickers.
