Reference: Graham, C. R., Tripp, T. R., Seawright, L., & Joeckel, G. L. (2007). Empowering or compelling reluctant participators using audience response systems. Active Learning in Higher Education, 8(3), 233-258.
Summary: This article reports the findings of a survey of 688 students in twelve courses across seven disciplines (chemistry, biology, physics, psychology, education, statistics, and human development) piloting clickers at Brigham Young University. One of the study’s central questions was the impact of clickers on “reluctant participators,” students who tend not to participate actively in class for various reasons. The survey included several questions designed to identify these students. Highlights of the survey results include the following:
- The primary reasons for negative student feelings about clickers were technical problems, cost of the devices, the use of clickers for grading, and the use of clickers to mandate attendance.
- The top four uses of clickers that students valued most were increasing student participation, helping students assess their own understanding, building mutual awareness among students, and adding some fun to class.
- The researchers found that students who prefer courses with limited student participation were less positive about the helpfulness of clickers.
- Other types of reluctant participators (those interested in peers’ opinions but hesitant to share their own and those hesitant to ask questions when they don’t understand the material) did not find clickers more or less helpful than non-reluctant participators.
The authors’ primary recommendation based on these results is to use clickers to “empower” rather than to “compel” students to engage with course material during class: “Pedagogical strategies that allow students to respond and get formative feedback on their own performance are viewed as more helpful than strategies that force participants and are used for grading purposes” (p. 249).
Comments: This survey generated rich data and is a useful model for similar clicker assessment projects at other schools. Both faculty and student surveys are included in appendices. The results about students’ negative feelings about clickers and reasons they find clickers useful are consistent with other research and with anecdotal faculty comments I’ve heard.
The authors’ recommendation to “empower” rather than “compel” is a sensible one. Instructors who use clickers to take attendance or to conduct graded quizzes should do so in ways that make the learning value of these activities clear to students. For instance, those using clickers for graded quizzes might review the quizzes in class immediately after students complete them. Clickers allow instructors to focus the review on the questions students missed most often and to provide very timely feedback on student performance. If such a quiz were instead graded after class and reviewed in the next session, the questions would no longer be fresh in students’ minds, and students likely would not benefit as much from the review.
The result that students who prefer classes with limited participation were less positive about clickers did not strike me as particularly surprising. It’s consistent with faculty comments I’ve heard and perhaps explains the reactions of certain MIT students to the active learning techniques used there.
The results for the other two types of reluctant participators (those hesitant to share their opinions and those hesitant to ask questions when they don’t understand the material) were a little more surprising. I would have expected these students to appreciate clickers more. Still, it may be important that they did not find clickers less helpful than their peers did. These are aggregate results across twelve courses, however, so the response of such reluctant participators to clickers may depend on classroom variables not investigated in this study.