Article: Lasry (2008)

Reference: Lasry, N. (2008). Clickers or flashcards: Is there really a difference? The Physics Teacher, 46(4), 242-244.

Summary: Lasry reports the results of a study contrasting the use of clickers and flashcards in facilitating peer instruction in an introductory physics course.  Lasry taught two sections of the course in the same semester.  In one section, students responded to multiple-choice conceptual understanding questions using clickers; in the other, they responded using flashcards.  In both sections, student responses determined what followed each question: further explanation of the topic by the instructor if most students missed the question, moving on to the next topic if most answered correctly, or peer instruction otherwise.
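
For concreteness, here is a minimal sketch of that decision rule in Python.  The 30% and 70% cutoffs are my own illustrative assumptions (figures in this range are commonly suggested in peer instruction practice); the article doesn’t report the exact thresholds Lasry used.

```python
def next_step(fraction_correct, low=0.30, high=0.70):
    """Decide what follows a conceptual question, per the protocol
    described above.  The low/high cutoffs are illustrative
    assumptions, not thresholds reported in the article."""
    if fraction_correct < low:
        return "instructor explains the topic further"
    if fraction_correct > high:
        return "move on to the next topic"
    return "peer instruction, then re-poll the question"

# Example: 55% of students answered correctly
print(next_step(0.55))  # -> peer instruction, then re-poll the question
```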

Lasry administered the Force Concept Inventory to both sets of students at the start and end of the semester as an assessment of the students’ conceptual understanding.  The normalized gain, (post% − pre%) / (100% − pre%), was 0.486 for the clicker section and 0.520 for the flashcard section, a difference that was not statistically significant.
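
A quick sketch of the normalized-gain computation in Python; the pre/post percentages below are made-up illustrative numbers, not data from the study:

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain: the fraction of the possible improvement
    actually achieved, g = (post% - pre%) / (100% - pre%)."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Illustrative numbers only (not from the study): a section averaging 45%
# on the FCI pre-test and 72% on the post-test would have
# g = (72 - 45) / (100 - 45) ≈ 0.49, in the range Lasry reports.
print(round(normalized_gain(45, 72), 3))  # 0.491
```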

Lasry’s conclusion is that “using peer instruction with clickers does not provide any significant learning advantage over low-tech flashcards.”  He notes that clickers might provide other advantages, such as enabling instructors to analyze student response data to improve their in-class questions over time and attracting other instructors to experiment with peer instruction.

Comments: Lasry’s data are certainly interesting and provide some evidence that peer instruction works as well with flashcards as with clickers.  However, he describes the “contributions of clickers” as being “more on the teaching side than on the learning side of the educational equation.”  I find this separation of teaching and learning a little artificial.  Whatever effects an instructional technology has on student learning depend on how the technology is implemented.  There are a couple of ways of implementing clickers with the potential to positively impact student learning that don’t appear to be addressed in this study, and these factors might explain the lack of difference in learning gains between the two sections.

For example, since clickers allow an instructor to track individual student responses, they can be used to hold students more accountable for their answers than flashcards can, which has the potential to increase students’ motivation to participate and engage with in-class questions.  It’s unlikely that student responses in the clicker section were factored into grades, since tracking individual responses in the flashcard section would have been impractical, and Lasry apparently tried to keep as many aspects of the two sections constant as he could.  If that’s the case, then students in each section would have been similarly motivated to participate, which might explain the lack of difference in learning gains.  Had clicker responses counted toward grades in the clicker section, those students might have performed better on end-of-semester assessments.

One of the points Tim Stelzer made in his clicker conference keynote last November was that student participation tended to decrease over time when flashcards were used at the University of Illinois.  I would be interested to find out whether participation differed between the two sections in Lasry’s study.  If it didn’t, then other factors, such as instructor experience or instructor-student rapport, might have kept participation high in the flashcard section, offering another explanation for why the clicker section didn’t exhibit greater learning gains.

Another implementation choice with a potential effect on student learning is “agile teaching,” that is, using response data from clicker or flashcard questions during class to make teaching decisions.  In Lasry’s study, response data were used to determine when to engage students in peer instruction.  Such decisions are likely most effective when based on accurate assessments of student learning.  As Stowell and Nelson (2007) showed, the flashcard method can lead instructors to overestimate their students’ comprehension, since students can see their classmates’ responses while selecting their own.  Clickers tend to provide more accurate feedback on student learning because they promote independent answering.  It’s possible that in Lasry’s study the flashcard method provided assessments accurate enough for the teaching decisions that were made, while other kinds of agile teaching decisions might have benefited from the more accurate data clickers provide.  To my knowledge, the impact of clickers on agile teaching hasn’t been well studied to date.

Finally, another way in which clickers might provide benefits over flashcards is that they make it easy for students to see the distribution of responses to a question.  Flashcards provide this distribution (in rough form) only to the instructor.  Seeing the distribution of responses can motivate students, particularly when they find out that most of their peers answered a question incorrectly.  It’s unclear from the article to what extent clicker or flashcard questions were used to generate “times for telling” in this fashion.  It’s possible that in classes where these kinds of questions are asked more regularly, clickers have a bigger impact on student learning because results are so easily displayed to the class.
