Reference: Crossgrove, K., & Curran, K. L. (2008). Using clickers in nonmajors- and majors-level biology courses: Student opinion, learning, and long-term retention of course material. CBE-Life Sciences Education, 7(1), 146-154.
Summary: The authors report on their study of the impact of clickers in two courses: an introductory biology course for non-majors and a genetics course taken by sophomore biology majors. Both authors brought experience with active learning techniques to these courses and had used clickers prior to this study.
The authors surveyed their students, and according to survey responses, students were generally very positive about the use of clickers. The non-majors in the introductory course were more positive than the majors on some points, including the usefulness of clickers in helping students score higher on exams.
Students’ average performance on final exams during the semester prior to the use of clickers was compared to average performance during the semesters in which clickers were used. There was no statistically significant difference in performance, although, as the authors note, the cohort of students changed each semester.
Within each course that used clickers, however, student performance on exam questions covering topics that were treated in class with clicker questions was significantly better (statistically) than performance on exam questions covering topics that did not involve clicker questions. This was true for all question types: factual recall, conceptual understanding, application, and analysis.
The authors also asked for student volunteers to take tests on the course material four months after the course ended. Although only a few students did so (14 or 15 for each course), the non-major students retained information taught via clickers at a significantly higher rate than they retained information not taught via clickers (dropping from 88% to 83% for clicker material, 92% to 75% for non-clicker material). Major students did a poor job of retaining all material, whether or not it was taught via clickers (dropping from 86% to 60% for clicker material, 87% to 61% for non-clicker material). It’s worth noting, however, that most of the clicker-material questions for the majors were “harder,” application-level questions, which may have contributed to the poor showing.
Comments: To explain the finding that the major students weren’t as positive about the use of clickers as the non-major students, the authors cite the different class sizes of the two courses (large in the case of non-majors, smaller in the case of majors). Other possibilities that occurred to me include the following:
- Different exam formats, perhaps because of a greater match between in-class clicker questions and exam questions in the introductory course,
- Different active learning techniques, perhaps because clicker questions didn't enhance the problem-based learning approach used in the genetics course to the degree that they enhanced the peer instruction approach used in the introductory course,
- Affective domain issues, perhaps because the mostly pre-med genetics students were more sensitive to grading issues than the non-major students, or
- Differences in expectations for learning, perhaps because the genetics students expected to spend time memorizing information, not interacting with information via clicker questions.
I would be interested in seeing more research (by the authors or by others) into the use of clickers with upper-level students exploring these issues.
I found it interesting that the students' performance on exam questions related to topics taught with clicker questions was better than their performance on other questions, and that this result didn't depend on the kind of exam question. However, it would be useful to know what kinds of clicker questions were asked during these courses. Are there certain kinds of clicker questions (conceptual understanding, application, etc.) that lead to better exam performance or retention of knowledge?
The authors noted that the standard deviation on questions involving topics taught via clickers was smaller than on other questions. They point out that this was noted in a previous study, as well. This statistic might be worth examining in future studies about clickers.