Article: Nagy-Shadman & Desrochers (2008)

Reference: Nagy-Shadman, E., & Desrochers, C. (2008). Student response technology: Empirically grounded or just a gimmick? International Journal of Science Education, 30(15), 2023-2066.

Summary: This article reports the results of a survey of 350 students (mostly elementary education majors) in 13 earth and physical sciences courses taught by five different instructors at California State University, Northridge (CSUN).

One interesting approach the authors took to their study was to include four questions from the National Survey of Student Engagement (NSSE) in their survey instrument. These questions were chosen, in part, because CSUN’s performance on them was below average among its peer institutions. It was apparently hoped that using clickers might help close this performance gap, and that was indeed the case: the NSSE questions indicated that students in these courses were more engaged than students in other courses at the institution in the following ways:

  • coming to class having completed readings and assignments,
  • receiving prompt feedback from instructors, and
  • working harder than students thought they could to meet the instructors’ standards.

Nagy-Shadman was one of the five instructors whose classes were studied. She apparently made frequent use of conceptual and application questions. She frequently used clickers to facilitate peer instruction and classroom games, and she often discussed correct and incorrect answers to clicker questions after voting. In one of her games, each team of students not only submitted team answers but was also rewarded for responding more quickly than other teams, an activity that apparently worked well for exam preparation.

It is unclear from the article what kinds of questions and activities the other four instructors implemented. Many of the outcomes measured in this survey likely depend on instructor variables, including implementation choices regarding clickers, that the article largely does not discuss. One exception is that the article provides evidence that instructors with more teaching experience and more experience using clickers had students who were generally more positive about the use of clickers.

Notable survey results on which students were in almost complete agreement, regardless of instructor, included the following.

  • Clickers had a neutral or beneficial effect on students’ attendance and tardiness.
  • Clickers were not particularly effective in fostering students’ problem-solving skills, relative to other effects.  (One should note that it’s not clear from the article what kinds of clicker questions were asked by the instructors of the courses surveyed.  Question type would have an impact on this measure.)
  • Clickers were not particularly effective in preventing “daydreaming” by students during class, relative to other effects.  (This result was true across all the courses surveyed, indicating that even the highly interactive uses of clickers employed by Nagy-Shadman didn’t eliminate student daydreaming.)

Following is a list of aspects of clickers that students liked, taken from an analysis of the student responses to open-ended questions on the survey.

  • Clickers added “variety / fun / interaction” to class.
  • Clickers allowed for anonymous responses.
  • Clickers provided immediate feedback to students on their learning.
  • Clickers helped students review material.
  • Clickers were easy to use.

Student complaints were focused on technical difficulties, “taking too much of class time” with clicker questions, “waiting for other students to answer,” and “difficulty reading the screen from the back of the room.”

Additionally, the article features well-researched discussions of the history of classroom response systems, writing multiple-choice questions, and the role of feedback in learning.

Comments: I think the use of NSSE questions here was a great choice, in part because the ability to compare the results with national data makes them stronger. It is not clear from the article, however, whether the NSSE results varied across the five instructors included in the study. It would be interesting to know if particular instructors or particular approaches to using clickers were more or less effective in improving these measures.

The finding that clickers improve attendance and reduce tardiness is a positive one. However, it would have been useful to know what attendance and participation policies the instructors in this study used. Given that complaints about grading and monitoring were not common in students’ responses to the open-ended survey questions, it seems unlikely that clickers were used to take attendance or judge participation in all five instructors’ courses. (See Graham et al. (2007) for a similar study in which students did complain about these issues.) If that’s the case, then this is an encouraging result: students report being more likely to attend even when their clicker responses aren’t tracked.

The authors summarize advice from the literature on writing multiple-choice questions. However, I would argue that the existing literature on writing multiple-choice questions (for quizzes and tests, for the most part) is only partially useful for writing in-class clicker questions. There are a variety of types of questions that function well in class but would not function well on summative assessments, including questions designed to help students explore new topics, questions for which there isn’t a single correct answer, student opinion and personal experience questions, questions about the teaching and learning process, and questions that ask students to assess each other’s work. This is one reason that instructors can have difficulty imagining how clickers might work in their classes: they’re used to writing multiple-choice questions for exams, not for in-class use.
