Article: James (2006)

It’s student participation week here on the blog.  Last week I grabbed a handful of articles on classroom response systems to read while proctoring my linear algebra exams, and, as it turned out, all the articles dealt with the impact of teaching with clickers on student participation.  I’ll be blogging about these articles all week.  Enjoy!

Reference: James, M. C. (2006). The effect of grading incentive on student discourse in peer instruction. American Journal of Physics, 74(8), 689-691.

Summary: James looked at the impact of grade incentives on student participation in two introductory astronomy courses.  The “high-stakes” course was a general intro course for non-majors with 180 students in which clicker questions “counted for 12.5% of the overall course grade… and incorrect responses earned one-third the credit earned by a correct response.”  The “low-stakes” course was a course on space travel and the possibility of extraterrestrial life for non-majors with 84 students in which clicker questions “counted for 20% of the course grade and incorrect responses earned as much credit as correct responses.”
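To put the difference in concrete terms (this is my arithmetic, not a figure from the paper): in the high-stakes course, a student who answered every clicker question incorrectly would forfeit roughly 12.5% × (1 − 1/3) ≈ 8.3% of the overall course grade, while in the low-stakes course the same student would forfeit nothing, since incorrect responses earned full credit.  The two labels really do describe very different incentives to be right.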

In each course, a few clicker questions were asked in each class session.  Students were asked to discuss each clicker question with a neighbor before responding individually to the question.  The peer instruction conversations of at least two dozen students in each course were audiotaped on three separate occasions (near the beginning, the middle, and the end of the semester), and each statement made by these students was coded using a set of ten categories (restating question elements, stating answer preference, providing justification for a way of thinking, and so on).

Key findings from the discourse analysis and other data include the following.

  • In the high-stakes classroom, conversations within pairs of students tended to be dominated by one of the two partners.  Furthermore, the dominant partner was typically the student who ended up with the higher grade in the course.  These correlations were not present in the low-stakes course, where conversations within pairs tended to be more balanced with each student contributing.
  • In the high-stakes classroom, conversation partners responded with different responses to clicker questions only 7.6% of the time.  In the low-stakes classroom, they did so 36.8% of the time.  James concludes that “when there is a grading incentive that strongly favors correct responses to CRS questions, the question response statistics displayed by the CRS after each question may exaggerate the degree of understanding that actually exists” and thus impede agile teaching that responds to student difficulties.

Comments: I was impressed with James’ analysis of audio-recordings of student conversations during class.  I think this qualitative research method is a powerful way of “uncovering” learning dynamics within the classroom.  The results of his analysis provide convincing data for his assertion that “the grading incentives instructors adopt for incorrect question responses impacts the nature and quality of the peer discussion that takes place.”

James’ finding that student conversations were more balanced in the lower-stakes classroom is an important one for instructors to consider when determining their grading schemes for clicker questions.  His finding that in the high-stakes class, dominant students tended to be students who ended up with higher grades in the course, however, makes me wonder whether students dominated because they had a better grasp of the material or whether they ended up with a better grasp of the material because they contributed so much during peer discussions.

Had the students been asked to respond individually to the clicker questions before peer instruction, data from those initial votes could have been used to settle this question, I think.  If the dominant students usually answered correctly before peer discussion, then it’s more likely they dominated because they were right.  If the dominant students didn’t answer correctly at higher rates than the other students, then it’s likely the dominant students worked out the correct answers through contributing to the peer discussion.

This is an important question because it points toward an assumption that I believe many readers of James’ article will make: that the more students are able to contribute to peer discussions, the more they learn.  I don’t believe James actually makes that assumption here; he’s simply describing the effects of grading incentive on who talks more.  However, there’s a large body of research that supports this assumption, so it’s a reasonable one to make.  Under that assumption, it’s a good thing if more students contribute to peer discussions, so instructors should use lower-stakes grading schemes.

Surprisingly, in the low-stakes classroom, student exam scores weren’t correlated with contributions during peer instruction.  This result seems to undercut the above assumption that discussion is useful to student learning.  Perhaps this lack of correlation is more of a statistical issue, however.  It could be that students in the low-stakes classroom all did pretty well on the exams; that restricted range of scores would weaken any correlation.  For that matter, if these students all contributed at similar levels, that would weaken the correlation as well.
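To make that “statistical issue” concrete, here’s a minimal sketch (with made-up numbers, not data from James’ study) of how a restricted range of exam scores can hide a real relationship: two hypothetical measures that are genuinely related show a much weaker correlation once we keep only the students who scored well.

```python
# A minimal sketch (hypothetical numbers, not James' data) of how a
# restricted range attenuates a correlation.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "contribution to discussion" scores and exam scores that
# really are related.
contributions = rng.normal(50, 15, 500)
exam = 0.6 * contributions + rng.normal(0, 12, 500) + 40

full_r = np.corrcoef(contributions, exam)[0, 1]

# Now keep only the students who did well on the exam, mimicking a class
# where "everyone did pretty well."
mask = exam > np.percentile(exam, 70)
restricted_r = np.corrcoef(contributions[mask], exam[mask])[0, 1]

print(f"correlation, full range:       {full_r:.2f}")        # roughly 0.6
print(f"correlation, restricted range: {restricted_r:.2f}")  # noticeably smaller
```

A toy simulation like this says nothing about what actually happened in James’ classrooms, of course; it only shows that a ceiling on exam scores (or on contribution levels) is enough to flatten a correlation.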

The finding that student pairs in the low-stakes classroom more frequently submitted different answers is a very useful one.  Practicing agile teaching by responding to the results of a clicker question will more likely enhance student learning if the results of the clicker question are accurate.  That’s why a classroom response system is a more useful response mechanism than hand-raising or flash cards, according to Stowell and Nelson (2007).  This is a strong argument in favor of low-stakes clicker questions.  If it is indeed the case that most students did pretty well in the low-stakes course, it might be because their instructor had better data with which to make agile teaching decisions.

This points to another reason why having students respond to the clicker questions individually before peer instruction would have given James a useful source of data.  If students who initially missed clicker questions ended up doing better on exams in the low-stakes course than in the high-stakes course, that would potentially provide evidence for the agile-teaching effect, the contribution-to-discussion effect, or both.

I think James does a good job of not overstating his results, as compelling as they are.  It’s important to point out, however, that this wasn’t a control group experiment.  The topics of the two courses (and thus the nature of the clicker questions) were different, as were the instructors.  Both of these elements could explain the differences in participation, independent of the grading incentives used.

Update: Mark James emailed me after reading this post and suggested that exam performance can help distinguish between more knowledgeable and less knowledgeable students for the purpose of analyzing peer instruction conversations (as he did in this study) but, given that exam performance is a measure of general knowledge of course content, it is less useful in assessing the specific impact of clicker-facilitated peer instruction on student learning, particularly given that only a subset of exam questions were on topics similar to those explored during clicker questions.  This makes sense to me and would explain the lack of correlation seen in the study between contributing to peer instruction discussions and exam performance.

James also pointed out a follow-up study, which I plan to read and blog about in the future.  Here’s the reference:

James, M. C., Barbieri, F., & Garcia, P. (2008). What are they talking about? Lessons learned from a study of peer instruction. Astronomy Education Review, 7(1).
