Article: Len (2007)

Patrick M. Len recently commented on an earlier post about clicker question banks, sharing links to his blog, where he regularly posts astronomy and physics clicker questions he has used. Since he was kind enough to share those links and to make his clicker questions available online for others to use, I thought I would take a look at one of his recent articles on classroom response systems.

Reference: Len, P. M. (2007). Different reward structures to motivate student interaction with electronic response systems in astronomy. Astronomy Education Review, 5(2), 5-15.

Summary: In this study, Len explores the impact of two different “reward structures” used for clicker activities in a medium-to-large astronomy survey course at Cuesta College:

  • Introductory questions were asked at the start of class. These questions were graded on effort, not on the accuracy of student responses. Students were allowed to discuss their answers with each other before voting; some did, and some did not.
  • Review questions were asked at the end of class. These questions were graded on effort, as well, but if at least 80% of the class answered the day’s questions correctly, those participation points were doubled. This led to some raucous class-wide discussions about the questions.

Sample questions of each type, many of which are conceptual understanding or application questions, are available online in appendices to the article.

Individual students were identified via their responses to a survey as independent workers (“self-testers” in Len’s terminology) or collaborators during the introductory questions. Two pre/post instruments were used to explore differences in these two types of students: the Survey of Attitudes Toward Astronomy (SATA) and the Astronomy Diagnostic Test (ADT).

One key finding of the study was that collaborators (those students who chose to work together to answer the introductory questions) became less confident in their astronomy knowledge and skills and valued astronomy less over the course of the semester, as measured by the SATA. Collaborators also “reported a lower pretest proficiency in science,” according to the ADT, even though they were as accurate in their answers to introductory questions as their self-tester peers.

Len concludes that this one-semester course in astronomy had a significant, negative impact on the beliefs and attitudes about science of these students. He recommends that since these students are “predisposed toward collaborative behavior,” instructors should think carefully about how to use clickers to structure collaborations in ways that increase student confidence and help them value astronomy more.

One other finding was that self-testers rated the helpfulness of the instructor’s lecture to their learning more highly than collaborators did. This complements other findings (Graham, Tripp, Seawright, and Joeckel, 2007) that students who prefer not to participate find clickers less helpful.

Commentary: There’s a lot of data here to make sense of, but I think Len has successfully argued that students who self-reported that they weren’t as good at math and science as their peers (a) preferred to collaborate when given the opportunity and (b) became less confident in themselves and less positive toward science during this course. His recommendation to structure collaborative activities (with or without clickers) in ways that are sensitive to these affective issues is a sound one.

Along those lines, it’s possible that the collaborator students’ attitudes and beliefs about science would have worsened even more over the course of the semester had they not been allowed to collaborate on introductory questions. Without that option, they likely would have answered these questions less accurately (instead of matching their self-tester peers), which in turn would have discouraged them further.

This issue of students in physics and astronomy courses becoming less interested in science because of these courses has been reported elsewhere in the Physics Education Research (PER) community (notably by Carl Wieman’s research groups at the University of Colorado and the University of British Columbia), and I think it’s an important challenge in science education. I’m glad to see this article by Len helping to explore this issue.

In my opinion, Len’s central question (the impact of different reward structures on students in his courses) is only partially answered. It’s clear that the “success-bonus” reward structure used for the review questions encouraged students to collaborate. However, given the way he describes the class environment when students answered his review questions (“Some students shouted for assistance from the rest of the class; others attempted to coach the rest of the class on how to answer, indicating their answer on the overhead projector using fingers, on the screen using laser pointers, or vocally”), it’s unclear to what extent critical reasoning, as opposed to persuasion and peer pressure, was a factor in these collaborations. An investigation of more structured approaches to implementing this reward structure would be beneficial.

As usual, your comments are welcome!
