Reference: Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69(9), 970-977.
Summary: In this now-classic article, Catherine Crouch and Eric Mazur present data from ten years of using peer instruction in introductory physics courses. The article describes Mazur's teaching practices for these courses, including ConcepTests (multiple-choice questions that help students develop conceptual understanding independent of computational skills), pre-class reading quizzes (used to motivate students to read their textbooks before class, shifting the transfer of information outside of class and freeing up class time for the assimilation of information), and peer instruction both with and without clickers.
For assessment, Crouch and Mazur compare student performance on pre- and post-tests (the Force Concept Inventory, or FCI, a widely used multiple-choice test of conceptual understanding in first-semester physics) from before Mazur began using peer instruction and after. They use normalized gain as their metric, computed by the formula (post - pre)/(100% - pre). Thus, a student who scores 70% on the pre-test and 80% on the post-test has a normalized gain of (80 - 70)/(100 - 70), approximately 0.33. Another student who moved from 90% to 95% would have a gain of 0.5, indicating that the student achieved 50% of the improvement possible between pre-test and post-test.
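The formula is simple enough to sketch in a few lines of Python (the function name and the rounding in the examples are my own, not from the article):

```python
def normalized_gain(pre: float, post: float) -> float:
    """Normalized gain: the fraction of the possible improvement
    (from the pre-test score up to 100%) that a student actually
    achieved. Scores are percentages, e.g. 70 for 70%."""
    return (post - pre) / (100 - pre)

# The two worked examples from the summary above:
print(round(normalized_gain(70, 80), 2))  # 0.33
print(round(normalized_gain(90, 95), 2))  # 0.5
```

Note that the metric rewards improvement relative to the room a student has left: the second student's raw gain (5 points) is half the first student's (10 points), yet the second student's normalized gain is higher.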
Using normalized gain on the FCI as a metric enables Crouch and Mazur to make comparisons to national data. In Richard Hake's 6000-student study of "traditional" and "interactive" physics courses, the average normalized gain for students in traditional courses was 0.23, whereas the average for students in interactive courses was 0.48, a substantial difference. The semester before Mazur started using peer instruction, his normalized gain was 0.25, consistent with Hake's findings for "traditional" lecture courses. The first semester Mazur used peer instruction, his normalized gain was 0.49, consistent with Hake's findings for "interactive" courses.
Perhaps most interesting is that as Mazur gained experience with these teaching methods (and refined them, for example replacing flash cards with clickers in his second year of using peer instruction), his normalized gain increased steadily year after year, reaching 0.74 the sixth time he implemented peer instruction. Thus he was, in a sense, nearly three times as effective at helping his students master concepts in first-semester physics as he had been with traditional lecturing.
Comments: I tend to review more recent articles on teaching with clickers on this blog, but I couldn't resist posting something about this classic article. Mazur's peer instruction technique is the most commonly used approach to teaching with clickers, and that's in large part due to the persuasiveness of the data he has collected on its impact in his courses. This article presents solid evidence that having students read their textbooks before class and grapple with tough conceptual questions in small groups during class is a superior way to teach first-semester physics.
It's also worth noting that Mazur's normalized gain improved over time. I'll occasionally read an article by an instructor who taught one section of a course with clickers and another section without, compared student performance in the two sections, and found that using clickers had little or no impact. These experiments often have a variety of design problems, but, regardless, it's important to note that instructors can improve in their use of a particular teaching method over time. Expecting great results the first or second time out is sometimes unrealistic, and big learning gains are sometimes only possible after a few semesters' experience.