Article: Mayer et al. (2009)

Reference: Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K., Bimber, B., Chun, D., Bulger, M., Campbell, J., Knight, A., & Zhang, H. (2009). Clickers in college classrooms: Fostering learning with questioning methods in large lecture classes. Contemporary Educational Psychology, 34(1), 51-57.

Summary: In this article, Richard Mayer and his collaborators, nine in all, describe the results of an experiment comparing the use of clickers to non-clicker alternatives.  A large-enrollment educational psychology course, taken mostly by junior and senior psychology majors, was taught one year in a “traditional” manner, without the use of in-class questioning or clickers.  The next year, the same course (with very similar students) was taught using in-class questioning facilitated by clickers.  In the third year, in-class questions were again used, but instead of responding with clickers, students wrote their answers on paper quizzes, passed those papers in to the instructor, and then indicated their answers to the questions with a show of hands.

Differences among the three courses were kept to a minimum.  The same instructor taught all three courses using the same lecture materials, with the exception of the questions added in the clicker and no-clicker classes.  Reading assignments and exam questions were also identical.  Having the students respond to questions in writing in the no-clicker class meant that their initial responses to a question were made largely independently of their peers, just as in the clicker class.  (The answers they signified during the shows of hands were, on the other hand, not necessarily independent.)

There were some differences, however.  The in-class questions in the clicker and no-clicker classes were graded (1 point for an incorrect answer, 2 points for a correct one), which meant grade incentives were a possible motivator in those two groups.  There was no parallel grade incentive in the “control” group.  Also, in the no-clicker class, the paper quizzes were typically administered at the end of a class session for logistical reasons (distributing and collecting the quizzes took time), whereas in the clicker class, questions were asked at various points during class.

The authors’ findings were certainly interesting.  When they compared midterm and final exam performance across the three courses, they found that the clicker class performed significantly better on the exams, averaging 75.1 points out of a possible 90.  The no-clicker class averaged 72.3, and the control group averaged 72.2.  (The difference here was statistically significant with p=.003.)  Those averages work out to roughly 83% for the clicker class versus 80% for the other two, so the clicker class ended up with an average course grade one-third of a letter grade higher: a B instead of a B-.  And the paper quizzes plus hand-raising made, in the authors’ words, “no discernible difference on student learning outcomes.”

Even more interesting was the following finding.  The clicker class performed almost identically to the other two classes on exam questions that were similar to questions asked (via clickers or paper quizzes) in class.  On exam questions that were dissimilar to the in-class questions, however, the clicker class performed significantly better (50.2 vs. 47.9 and 48.2, p=.002).

The authors conclude from these data that the logistical difficulty of implementing the paper quizzes (distributing the quizzes, collecting the quizzes, and so on) interfered with any benefit gained from questioning students in this manner.  They also note that asking the questions at the end of a class session might reduce their impact on students’ learning.  The use of clickers made questioning students “seamless” for the instructor, who could then test students and provide feedback closer in time to the initial learning experience.

The authors also note several components of active learning that might explain why the clicker class outperformed the other two classes on exam questions dissimilar to in-class questions: “(a) paying more attention to the lecture in anticipation of having to answer questions, (b) mentally organizing and integrating learned knowledge in order to answer questions, and (c) developing metacognitive skills for gauging how well they understood the lecture material.”

Comments: These results are fairly persuasive.  The authors did a good job of controlling for potentially confounding variables, and the use of three groups (clickers, no clickers, and control) meant that they could isolate the effect of the clickers from the effect of having students respond to questions during class.  Their conclusion, that clickers make questioning easier for both instructors and students and so allow questioning to have more impact, makes sense to me.

Another possible explanation for the higher learning gains in the clicker class is that the students in the clicker class were able to see the displayed results of the clicker questions, whereas the students in the no-clicker class had to rely on a show of hands to see where their peers stood on a question.  Since it’s been shown that the hand-raising method leads to inaccurate representations of student understanding (see, for instance, Stowell and Nelson, 2007), it may be that the more accurate reporting of student responses afforded by a classroom response system led students to take the process more seriously in one way or another.

It’s also worth noting that after questions were asked and answered by students in both the clicker and no-clicker classes, not too much happened.  The instructor would state the correct answer, have a student volunteer share reasons for the correct answer, then share his own reasons for it.  There wasn’t much in the way of agile teaching (doing something different in class in response to the results of a clicker question) or peer instruction (having students discuss questions with each other prior to answering).  Nor, apparently, was there much discussion of incorrect answers.  All of these practices have potential pedagogical benefits.  Had they been employed, the difference in learning outcomes between the clicker class and the other two classes might have been even greater.

I should also point out that the article doesn’t clearly state the instructor’s experience teaching with clickers, although it seems a safe bet that the instructor was new to using them.  Instructor experience is another important variable, as is the nature and difficulty of the questions used.  A few sample questions were included in the article, but it would have been helpful to know how difficult students found these questions.  Did most students answer them correctly?  Did a lot of students answer them incorrectly?
