Article: Bunce, VandenPlas, & Havanki (2006)

Reference: Bunce, D. M., VandenPlas, J., & Havanki, K. (2006). Comparing the effectiveness on student achievement of a student response system versus online WebCT quizzes. Journal of Chemical Education, 83(3), 488–493.

Summary: In the context of a 41-student chemistry course for nursing students, after certain topics were introduced during lecture, students were asked to respond to questions about those topics delivered via a classroom response system (CRS).  (In this case, the system used software running on wireless-enabled laptops that were loaned to students for this purpose.)  After class, students were asked to complete online quizzes on class topics prior to the next class.  The impact of these activities on student learning was assessed through performance on instructor-written hourly exams and a standardized final exam provided by the American Chemical Society.

Since some questions on the hourly and final exams featured topics covered in the CRS questions, some in the out-of-class quizzes, some in both, and some in neither, the authors were able to assess the impact of the two review mechanisms on student learning.  Students did better on hourly exam questions tied to the online quizzes than on other questions, indicating that the online quizzes were the more effective of the two mechanisms at preparing students for these exams.  Results from the final exam indicate, however, that neither the in-class CRS questions nor the out-of-class online quizzes helped students do better on the final.

The authors report a couple of problems that undercut these results.  One was that as students responded to the in-class CRS questions, they were able to see the bar chart showing the results as they came in.  This meant that, as one student put it, “People who don’t know the answer simply wait for the graph [before entering their responses] and no real learning occurs.”

The other was that the online quiz questions were made available to students after the quizzes for study purposes, and, according to a survey, students took advantage of this resource.  The in-class CRS questions, however, were not made available to students for review.

On a survey, students reported that the most useful features of the CRS were that it helped reinforce what they had learned in class and that it gave them the opportunity to talk about class material with their peers, which helped them learn the material.

Comments: This article is a useful example of how difficult it can be to design a study about classroom response systems that provides meaningful results.  I’m glad the authors published this account, even though their results aren’t strong.  I think others designing similar studies are likely to learn from some of the design mistakes made in this study, in part because the authors did a great job of exploring the constraints of their study design in the conclusion of the article.

One of the primary advantages of having students respond via clickers or similar classroom response systems is that students are able to respond to a question independently.  Each student is asked to respond before he or she finds out what his or her peers think.  This can increase the level of engagement of the students with the question and any subsequent discussion.  The “bandwagon effect” seen in this study, in which students wait to find out what their peers think and then pick the most popular response, is why asking for a show of hands can be unproductive.  Since the classroom response system in this study was used in such a way as to eliminate this primary advantage, it’s difficult to draw meaningful conclusions from the study.

What is clear from this study is that the availability of clicker questions for students to review after class is a potentially important variable, one that should be included in future research on the effects of classroom response systems.  This study presents some evidence that this may be a key ingredient in the impact of clicker questions on student learning.  I hope that future studies take a closer look at this variable.

Studies like this one that compare classroom response systems with alternatives (like online quizzes or asking for a show of hands) can get complicated very quickly.  I thought I might list just a few of the variables at play in this study to illustrate how difficult it can be to isolate the effects of a CRS.

  • Independent Answers – In this case, students didn’t have to answer in-class questions independently; they could see their peers’ answers before responding.  This was probably not the case for the online quizzes; students likely had to respond on their own before seeing their peers’ responses.
  • Peer Instruction – Students in this study were required to pair up and provide consensus answers to the in-class questions.  Students worked on their own when responding to the online quiz questions.  This seems like a significant difference between the two methods.
  • Class Results – Students were shown the class results for the in-class questions.  It’s not clear if they were shown the class results for the online quiz questions, either online or in subsequent classes.  The impact of seeing these class results is something worth exploring in CRS research since these results aren’t typically immediately available when other response mechanisms are used.
  • Immediacy of Feedback – With the in-class questions, students found out immediately whether they answered the questions correctly.  It’s not clear from the article if students had to wait to receive feedback on their online quizzes.
  • Agile Teaching – In-class CRS questions that were missed by significant numbers of students were reviewed in some fashion during class, which meant that the “lecture” was responsive to student learning needs evidenced by the CRS.  It’s not clear if online quiz questions influenced class time in a similar fashion in this study.  The ability to practice “agile teaching” is a primary advantage of a CRS, so this is an important variable to consider.
  • Question Type – Not much is said in this article about the nature of questions asked in class via CRS or out of class via online quizzes.  The two sets of questions were judged to be of similar difficulty levels by a panel of four chemical educators, however.  The students appeared to answer the online quiz questions correctly about 88% of the time, so the questions weren’t that difficult.  The format, difficulty, and learning goals associated with questions are potentially important variables.
  • Availability for Review – The online quiz questions were made available to students for test review; the CRS questions were not.
  • Grading Scheme – The article doesn’t state how either the in-class questions or the online quizzes were factored into students’ grades.  Given the role grades play in student motivation, this is a variable worth noting.

These are some of the variables I look for when I read about studies exploring the impact of classroom response systems.  I hope this partial list will be of use to researchers reading my blog.
