Article: King & Joshi (2008)

Reference: King, D. B., & Joshi, S. (2008). Gender differences in the use and effectiveness of personal response devices. Journal of Science Education and Technology, 17(6), 544–552.

Summary: In this paper, King and Joshi present the results of a study of student participation and performance in two semesters of a chemistry course for engineering students, with a particular focus on gender differences. In the first semester, one section of the course used clickers without including clicker questions in the students’ grades in any way, while the other two sections did not use clickers at all. In the second semester, only one section of the course was offered. Clickers were used in this section, and clicker questions contributed to a participation grade for the students (5% of the overall course grade, with full credit awarded to students who answered at least 75% of the clicker questions throughout the semester, correctly or not).

The authors found that in the first semester’s clicker section, when clicker questions were not included in student grades, there was a statistically significant difference in the response rates of male and female students. Female students answered 62% of clicker questions on average, whereas male students answered only 48%. In the second semester, when clicker questions were included in students’ grades, there was no significant difference in the response rates of male and female students.

The authors also found that students who were “active participators” (those who answered at least 75% of clicker questions in a semester) had higher final grades than students who were not active participators. This difference, however, was statistically significant for male students but not for female students. These results suggest that although male students participated less frequently than female students, male students who were active participators benefitted more from participation via clicker questions than female students did.

The differences in final grades between active and non-active participators were consistent whether or not clicker questions were graded.  The authors conclude from this that “while the average grade improvement was the same during each term, the benefit of requiring clicker usage is that a greater number of students receive this benefit when participation is tied to their course grade.”  This argues for grading clicker questions, particularly for male students, who not only participate less when clicker questions aren’t graded, but also appear to benefit more from being active participators.

The authors also looked at student performance on final exam questions that were “related” to clicker questions asked during the semester. As the authors expected, students who answered clicker questions correctly tended to do better on related final exam questions. More surprising was that students who answered clicker questions incorrectly also did better on final exam questions than students who didn’t respond to the related clicker questions at all, indicating that class participation via clicker questions helped prepare students for exams.

It is worth noting that the correlation between answering clicker questions incorrectly and doing well on related final exam questions was not observed in the second semester in which clickers were used. Recall that in the second semester, clicker questions were included in students’ grades. The authors argue that this led some students to simply click in to earn participation points without really trying to answer the clicker questions. Thus, including clicker questions in students’ grades is likely to encourage more students to participate, but enough disengaged students are likely to click in (incorrectly, in most instances) that the impact of clicker questions on student performance is harder to see in the data.

Comments: In their literature review, the authors note that there is evidence that teachers tend “to ask questions of and praise male students more than female students.” This potential bias is another good reason to use the pick-a-random-student feature of some classroom response systems. Having the system randomly select a student from among those who responded to a clicker question helps prevent this kind of bias.

King and Joshi’s main results – that students who respond to clicker questions (correctly or not) benefit from the participation, and that grading clicker questions (on effort) leads to more students (particularly male students) participating in this useful way – are interesting and persuasive. The authors did a good job of blending a quasi-experimental design (grading clicker questions in one semester, not grading them in another semester) with data collected within a single semester to argue these points.

Given the authors’ comments about students in the second semester just clicking in for participation points, I wonder if asking students for their confidence in their answers would have helped parse out these students to yield more meaningful data from that semester.  For instance, if students who weren’t really trying to answer clicker questions could be persuaded to signify that they had low (and not high) confidence in their answers, one could remove responses with low confidence from the data set to see if answering clicker questions incorrectly still had a positive correlation with success on related exam questions.

I should also add that it was unclear from the article what kinds of clicker questions were used in these courses–difficult ones, easy ones, recall questions, conceptual understanding questions, etc.  It was also unclear if students were asked to discuss clicker questions in small groups or as a class or if the instructors practiced “agile teaching,” responding in meaningful ways to the distribution of responses for particular clicker questions.  More description of these contextual factors would give the authors’ results more meaning.

King and Joshi’s results about gender – that male students tend to participate less frequently than female students (when not motivated by grades) and that male students who do participate benefit more in terms of performance on final exams – are also very interesting. I’m reminded of findings shared by Hoekstra (2008) that male students tend to prefer to respond to clicker questions on their own, whereas female students tend to collaborate with other students prior to answering when given the option. Hoekstra found that the male students liked to test themselves by seeing if they could answer a clicker question without external help. It’s unclear from the King and Joshi article if students were allowed or required to discuss clicker questions prior to responding to them, but if students were given the option of responding on their own, it might be that male students who self-test via clicker questions (whether voluntarily or when prompted to do so by including clicker questions in course grades) benefit from doing so, leading to greater learning gains for these participating male students.
