Article: Carnaghan & Webb (2007)

Here’s part three of student participation week.  I’m blogging all week about recent research on the impact of teaching with clickers on student participation in class.

Reference: Carnaghan, C., & Webb, A. (2007). Investigating the effects of group response systems on student satisfaction, learning, and engagement in accounting education. Issues in Accounting Education, 22(3), 391-409.

Summary: Carnaghan and Webb explore the impact of clicker usage on student learning and participation in four sections of an introductory management accounting course.  Three of the sections had 40 or fewer students; one had 72 students.  In two of the sections, clickers were used in the first half of the semester, then not used in the second half.  In the other two sections, clickers were only used in the second half of the semester.  This study design allowed the authors to investigate the effects of adding clickers to a course as well as taking them out of a course at the halfway point.

When clickers were used, students were encouraged to discuss their responses to the clicker questions before submitting them.  Once the votes were in, the results were displayed to the students in a histogram with the correct response highlighted.  If a significant number of students missed the question, “a student volunteer was asked to explain his or her response, followed by further discussion.”  If most students answered correctly (which was often the case since the average for the questions was 84% correct), it’s unclear what happened.  Presumably the instructor said a few words about the question and moved on.

When clickers were not used, the very same multiple-choice questions were used and students were again encouraged to discuss their responses before committing to them.  No polling mechanism was used, and thus neither instructor nor students were aware of the distribution of responses.  Instead, the instructor asked for a volunteer to answer the question.  If the volunteer was incorrect, more volunteers were enlisted “until either the correct answer was provided or the level of confusion indicated significant student difficulties.  The instructor would then display the correct answer, followed by further discussion and an explanation of the correct approach to solving the problem.”

To assess the impact of clicker use on student learning, scores on two midterm exams were compared.  Using clickers improved student performance on midterm exam questions that were related to the multiple-choice questions used in class, but only for the second midterm exam, not the first midterm exam.  According to student survey results, students perceived the use of clickers as helpful to their learning.

To assess the impact of clicker use on student participation, the authors looked at student survey results but also had a teaching assistant record how often each student asked the instructor a question or answered an instructor’s question during class.

  • Losing Clickers: Survey results indicated that students who had clickers taken away from them at the midpoint of the semester felt significantly less comfortable participating in the absence of the clickers.  However, these students actually asked more questions in the second half of the course, though they answered fewer questions.
  • Gaining Clickers: The students who initially did not use clickers felt slightly more comfortable participating with clickers than without, but this difference was not statistically significant.  When clickers were introduced at the midpoint of the semester, this group of students asked and answered fewer questions.
  • Overall: Taken as a whole, the students asked an average of 2.3 questions per student over the eleven observed class sessions when clickers were in use, but an average of 3.2 questions per student when clickers were not used.  Similarly, the rate for answering questions was 5.9 answers per student with clickers and 6.2 answers per student without clickers.

As noted above, the average rate of correct responses for clicker questions was 84%.  The authors found a negative correlation between the percentage of students asking questions during class and the class score on the clicker questions.  As the authors state, “It appears that a [CRS] may discourage discussion in classes where the feedback from the system indicates a majority of students understand the concepts being reviewed.”

Comments: I first heard about this study in a July 2007 article on clickers in university settings in Maclean’s Magazine, a Canadian news magazine.  Here’s the quote that got my attention:

Even odder was the fact that students in the clicker classes interacted less with their professors by asking fewer questions. “It actually suppressed verbal participation,” says Webb, who was initially puzzled by the result. His subsequent theory: “I think students who got the questions wrong saw how many of their classmates got it right. If they were in the minority, they wouldn’t want to look foolish in front of their peers and they didn’t ask questions.”

I would argue that Webb is correct in the quote above, and that this study provides evidence not that the use of clickers discourages participation, but that the use of clickers when correct answers are indicated on the display of results discourages participation.

Consider again what occurred in the clicker classes in this study.  When the results of a clicker question were displayed, the correct answer was indicated.  Some students probably assumed that since they knew the correct answer, they fully understood the question (whether or not they had responded correctly themselves) and so were less likely to ask questions or otherwise participate.  As Webb points out, students who answered the question incorrectly were probably less likely to participate at this point because they knew they were wrong and didn’t want to look foolish in front of their peers, particularly when they also knew that most of their peers answered the question correctly (as was often the case given the 84% success rate of students on clicker questions).

In the non-clicker classes, after students had time to think about and discuss the question at hand, the instructor called on volunteers to share their reasoning.  The students who joined in the discussion at this point did not have the correct answer confirmed for them by the instructor, and that uncertainty about the correct answer likely encouraged healthy discussion.  Students were more likely to engage in the discussion in order to find out the correct answer, and students with incorrect answers had no particular reason to stay silent during the discussion.

Thus, the use of a correct answer indicator quite possibly explains the differences in participation rates between clicker sections and non-clicker sections.  However, as I mentioned in yesterday’s post, just showing students the histogram of results of a clicker question might inhibit further student discussion of the question, particularly if one answer choice is far and away more popular than the others.  Students are likely to read such results as indicating that the popular answer is the correct one and disengage from subsequent discussion for the reasons mentioned above.

The authors assert that the only differences between the clicker sections and the non-clicker sections in their study were “those related solely to characteristics of the [CRS] technology.”  That assertion is, for the most part, true.  However, two features of the technology–the display of results and the correct answer indicator–significantly changed the classwide discussion strategy implemented by the instructor.  As I have said on this blog before, one can’t study whether or not using classroom response systems enhances student learning or student participation.  Rather, one can only study whether or not using classroom response systems in particular ways affects those outcome measures.  Had the authors studied two types of clicker classes, one in which correct answer indicators were used and one in which they were not, then we might have a better answer to the question of the impact of this particular way of using clickers.

As it is, however, this article only provides evidence that the ways in which clickers were used in this study–displaying results along with correct answer indicators to students for clicker questions that were generally fairly easy–are not effective at encouraging student participation during class.

I’ll finish by noting that the preprint of the article I read back in 2007 is still available online.  However, the final published version of the article includes some very important revisions (particularly to the statistics used), so I would discourage readers of this blog from reading the preprint.
