Article: Lucas (2009)
Here’s part two of student participation week. I’ll be blogging all week about recent research on the impact of teaching with clickers on student participation in class.
Reference: Lucas, A. (2009). Using peer instruction and i>clickers to enhance student participation in calculus. PRIMUS, 19(3), 219-231.
Summary: In this article, Lucas assesses his use of clicker-facilitated peer instruction in his calculus courses. Lucas has his students respond individually to clicker questions, then displays the results to the class (as a histogram), then has the students discuss the questions in small groups prior to a second vote and classwide discussion. His grading scheme sounds high-stakes initially, since students receive only half-credit for wrong answers, but since he only uses clicker grades when students’ numerical course grades fall between two letter grades, the stakes are actually fairly low. (According to the article I discussed yesterday, James (2006), this should encourage balanced peer discussion.)
There was a moderately strong correlation between students’ clicker scores and their overall course grades (r = 0.57). Lucas notes that instructors might therefore use clicker scores early in the semester to identify students who are struggling in a course. Homework scores not only take more effort to obtain but were also less strongly correlated with course performance in Lucas’ case.
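For readers who want to run this kind of analysis on their own course data, here’s a minimal sketch of computing a Pearson correlation coefficient like the r = 0.57 Lucas reports. The student data below is entirely hypothetical and just illustrates the calculation:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance (unnormalized) and the two standard-deviation terms
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: clicker accuracy vs. final course average for five students
clicker = [0.9, 0.6, 0.8, 0.4, 0.7]
grades = [92, 74, 85, 70, 78]
print(round(pearson_r(clicker, grades), 2))
```

(In practice you’d pull these two columns from your gradebook; libraries like SciPy provide the same computation as `scipy.stats.pearsonr`.)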
Based on end-of-semester student surveys in two calculus courses, one featuring clicker-facilitated peer instruction and the other taught in a more traditional manner, students who participate in peer instruction activities place a greater value on student-student learning (as opposed to instructor-student learning) than students who do not.
Lucas was interested in exploring the impact of the instructions he gave his students on their participation in peer discussion. He videotaped two tables of eight students each discussing a particular question. One table was given no instructions; the other was told to first discuss the question in detail in pairs using pencil and paper to explain their answers to each other and then discuss the question with other students at their tables.
Instead of defending their own, correct answers, the students who were given no instructions deferred to one of the “high status” students at the table, even though that student was incorrect. Lucas defined a “high status” student for the purposes of this study as one who ended up with a B+ or higher in the course, assuming “that students receiving high grades were regarded by their peers as having higher status.” Furthermore, Lucas states that at this table, “there was very little mathematical dialogue” in the time allocated for discussion.
In contrast, at the table where students were given instructions to discuss the question in pairs using pencil and paper, the video indicated that the students spent most of the discussion time doing exactly that. Furthermore, for the two pairs at the table that consisted of one high status student and one non-high-status student, the non-high-status students contributed to the pair discussions. In each case, both students were initially incorrect (with the same wrong answers) but through balanced discussions that involved mathematical reasoning communicated in writing were able to arrive at the correct solution.
Lucas concludes that the instructions given to students prior to peer instruction impact the nature of the peer discussions and that in a math class, encouraging students to discuss clicker questions using pencil and paper enhances the quality of those discussions.
Comments: James (2006), the subject of yesterday’s post, argues that the grading schemes used with clicker questions impact the nature of the discussions that occur during peer instruction time. Lucas here argues that the instructions teachers give students for peer instruction time are also important. I think Lucas is onto something, although his argument is weakened by the fact that he analyzed the discussions among only two groups of students about a single clicker question. Further studies are necessary, I think. It would be fairly easy for Lucas and other instructors to vary the instructions they give students prior to peer instruction, then see which sets of instructions lead to greater convergence to correct answers from the first vote to the second vote.
I think Lucas’ findings were enhanced by his use of video, however. Video- or audio-taping student conversations provides a useful tool for better understanding the nature and dynamics of peer discussions. James’ results are certainly stronger because of his analysis of such audio-recordings.
There are other factors that might impact the nature of discussions during peer instruction time, of course. Eric Mazur and Nathaniel Lasry, in particular, have mentioned the display of the results of the initial clicker vote as one potentially important factor. If there’s consensus around a single response (right or wrong), students seeing the histogram might assume that the popular answer is the correct one and thus, believing they already understand it, disengage from subsequent discussion of the question. Thanks to Mazur’s and Lasry’s observations as well as my own, I’ve been much more intentional this semester about whether and when to show my students these initial results. There’s potential for a study of this factor, too.
Lucas’ definition of “high status” is a practical one, certainly, and a useful one, too, I think. James explored the connection between high-performing students and contributions to peer discussions in his study, too. There are other definitions of status, however. For instance, when I interviewed Edna Ross for my book, she described some of the ways in which race and gender affect student-to-student discussions during peer instruction time. If better instructions and lower stakes help motivate lower-performing students to participate more meaningfully in peer instruction (as Lucas’ and James’ results seem to indicate), might these methods also help defuse some of the negative ways that race and gender impact peer instruction? Given the results of Reay, Li, and Bao (2008), indicating that their clicker-facilitated question-sequence pedagogy reduced the performance gap between male and female students, the answer is quite possibly yes. There’s another study idea for you…
Two final comments: I like the idea that clicker scores might function as an easily obtained early warning indicator for students struggling in a course. Implementing this would involve scoring clicker questions on accuracy (for this purpose if not as part of students’ grades), as well as taking a look at individual student clicker scores early in the semester.
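To make the early-warning idea concrete, here’s a minimal sketch of how an instructor might flag students by clicker accuracy a few weeks into the semester. The student names, response data, and the 50% accuracy threshold are all hypothetical:

```python
from statistics import mean

def clicker_accuracy(responses):
    """Fraction of clicker questions answered correctly (1 = correct, 0 = incorrect)."""
    return mean(responses)

def flag_struggling(students, threshold=0.5):
    """Return names of students whose clicker accuracy falls below the threshold."""
    return [name for name, responses in students.items()
            if clicker_accuracy(responses) < threshold]

# Hypothetical first-three-weeks clicker data: 1 = correct, 0 = incorrect
students = {
    "alice": [1, 1, 0, 1, 1, 1],
    "bob":   [0, 1, 0, 0, 1, 0],
    "carol": [1, 0, 1, 1, 0, 1],
}

print(flag_struggling(students))  # → ['bob'] (accuracy 2/6 ≈ 0.33)
```

The point isn’t the code, of course, but that the data is already sitting in the clicker software; flagging students this way takes far less instructor effort than grading a homework set.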
Also, Lucas’ finding that implementing peer instruction in the classroom leads students to value learning from their peers more is an interesting one. This result indicates that the teaching methods we use can have an impact on students’ metacognition, their learning about learning. And if you believe that student-to-student learning is valuable (as many do), then we can have a positive impact on our students’ metacognition by implementing peer instruction.
I’ll add here that Adam Lucas, the author of this article, and I will be facilitating a minicourse on teaching with clickers and classroom voting at the January 2010 Joint Mathematics Meetings in San Francisco. Math faculty interested in getting started teaching with clickers are encouraged to join us!