Article: Hoekstra (2008)

And here’s the fifth and final part of student participation week here on the blog.  In some ways, I’ve saved the best for last.  Today’s paper is by Angel Hoekstra of the University of Colorado at Boulder.  When I interviewed Angel for my book, I got a preview of some of her findings, and I was impressed by her use of qualitative research methods to explore student perceptions of learning with clickers, so I was expecting interesting and useful results from her published work.  Having finally gotten around to reading that work, I’m glad to say my expectations have been met.

Reference: Hoekstra, A. (2008). Vibrant student voices: Exploring effects of the use of clickers in large college courses. Learning, Media and Technology, 33(4), 329-341.

Summary: Hoekstra investigated student perceptions of learning with clickers in multiple sections of a general chemistry course over a period of three years.  Most of the students (apparently between 200 and 300 per section) take the course to fulfill degree requirements, but typically fewer than 10% are chemistry majors.  Surveys of these students indicated that 80-90% are “concerned about whether or not they will pass the course” and fewer than 10% “would feel comfortable enough to respond to a professor’s question by raising their hand in the large lecture hall.”

Clicker questions were typically used in the course after 10-12 minutes of lecture to assess student understanding of the material just explained.  “Classic” peer instruction was not used.  Instead, each clicker question was asked once and students were encouraged (but not required or prodded) to discuss the questions with their neighbors before voting.  Instructions given to students for these discussions were fairly vague (e.g. “Feel free to work with your neighbors”).  Also, clicker questions were followed by instructor explanations, not classwide discussion.  Correct answers to clicker questions earned three points each; incorrect answers earned one point each.  Clicker questions contributed only 5% of the students’ overall course grades.

Hoekstra investigated student perceptions of clickers through three semesters of student surveys administered using clickers, observations of 27 class sessions, and in-depth interviews with 28 students averaging 56 minutes in length.  Since students in the initial round of interviews had favorable reactions to clickers, students with less favorable views of clickers were interviewed in the second round.  Observation notes and interview transcripts were analyzed through in vivo coding, “a form of open coding designed to allow conceptual categories to emerge from the data.”

Key results are as follows:

  • Interviews indicated strongly that students pay more attention during lecture because they know clicker questions will be asked frequently.
  • Students stated in interviews that “they looked forward to times when they were able to talk with their peers” during clicker questions.
  • Observations revealed that students frequently voiced their reactions (positive, negative, surprised) when the results and answers of clicker questions were displayed.
  • Interviewees also indicated that the results displays provided them useful and regular feedback on their learning in the course.  Some even indicated that the clicker questions were most helpful when they answered them incorrectly since these were opportunities to resolve misconceptions.
  • About 15-20% of students chose not to engage in peer discussion of clicker questions.  The decision to engage was typically influenced by the difficulty of the clicker question and the student’s “affinity for working with others.”  Interestingly, during more difficult clicker questions, female students were more likely to engage in peer discussion than male students, who tended to use these questions as tests of their own understanding.  Other reasons for working alone included not having done the reading before class, a disinterest in hearing possibly incorrect explanations from peers when the correct explanation would be forthcoming from the instructor, and the difficulty of holding students accountable for peer interaction in such a large class.
  • Most students found the general noise and activity levels in the classroom during peer discussion stimulating.  Some students found it distracting and would have preferred times of quiet as they answered clicker questions.
  • Many students felt that clicker questions increased their anxiety levels during the initial weeks of the course, but as they became comfortable with the technology and with peer instruction, they found that clicker questions decreased their overall anxiety about the course.

Hoekstra uses a quote from Trees and Jackson (2007) to summarize many of her findings: “The success of clickers is in many ways dependent on social, not technological, factors.”

Comments: Where to begin?  This study is a rich source of insight into the many ways that students interact with clicker questions and with each other during times of peer instruction.  I’ve briefly summarized the findings above, but the paper includes details, examples, and very illustrative quotations from student interviews.  I realize that some find qualitative research less meaningful than quantitative research, but I think the scope and rigor of Hoekstra’s work add much credibility to her findings.

Before commenting on a few specific findings, I thought I might connect the teaching environment described in Hoekstra’s paper with some studies I blogged about earlier in “student participation week” here on the blog.  Given the results of Lucas (2009), the vague instructions students received for discussing clicker questions might have reduced the quantity and quality of their participation in those discussions.  The grading scheme for clicker questions was simultaneously high-stakes (correct answers earned three times as many points as incorrect answers) and low-stakes (clicker questions contributed only 5% of students’ course grades), so the results of James (2006) and Willoughby and Gustafson (2009) don’t make clear whether this scheme would have enhanced or inhibited student participation.  Hoekstra’s work did not include a control group of any kind, so one can’t say whether student participation in the courses she studied was lower or higher than it would have been under different conditions.  Her results do seem to indicate, however, that most students engaged in meaningful and productive peer discussions in spite of the vague instructions and the somewhat high-stakes grading scheme.
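To make that high-stakes/low-stakes tension concrete, here’s a minimal sketch of the grading arithmetic in Python.  The 40-question semester total and the assumption that full credit means answering every question correctly are mine for illustration, not details from the paper:

```python
# A sketch of the grading scheme Hoekstra describes: 3 points per correct
# answer, 1 point per incorrect answer, with clicker points worth 5% of
# the course grade.  The 40-question total and the scaling (full credit =
# all answers correct) are assumptions for illustration only.

def clicker_grade_share(correct, incorrect, weight=0.05):
    """Fraction of the overall course grade earned from clicker points."""
    earned = 3 * correct + 1 * incorrect
    possible = 3 * (correct + incorrect)
    return weight * earned / possible

best = clicker_grade_share(40, 0)   # all correct:   0.050 (the full 5%)
worst = clicker_grade_share(0, 40)  # all incorrect: ~0.017 (about 1.7%)
print(f"Maximum swing: {100 * (best - worst):.1f} points of the course grade")
```

Under those assumptions, answering every question incorrectly costs a student only about 3.3 percentage points of the course grade relative to a perfect clicker record: high-stakes question by question, but low-stakes overall.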

As for Hoekstra’s specific findings, I think they lend support to a statement I made in my book: “Knowing that a deliverable [a clicker question] may at any time be requested from students can help students maintain attention and engagement during a class session.”  Clickers make it easy to request frequent deliverables from students during class, and Hoekstra’s findings indicate that this is an important reason to use clickers.  Hoekstra’s findings also support other reasons I frequently give for using clickers: sharing the results of a clicker question can enhance student engagement, clicker questions provide students with useful feedback on their learning, and clicker questions can be useful in structuring class time for students.

Hoekstra’s findings about gender and student participation are thought-provoking.  I’ve debated the importance of an initial, independent vote prior to peer instruction time with several instructors who tend to skip this initial vote (particularly my friends in the math department at Carroll College).  Hoekstra’s findings indicated that for difficult clicker questions, female students might not get as much out of an initial, independent vote as male students, whereas male students might not appreciate jumping straight into peer instruction without a chance to respond independently.

This certainly complicates the debate about initial, independent votes, as well as a teacher’s choices during peer instruction times.  I’ve frequently found that the students in my classes who hesitate to engage in peer discussion are male (so Hoekstra’s findings ring true to me), and I typically prod these students to discuss clicker questions with their peers.  I may not do that as much in the future given these results.

The situation is further complicated by the finding that some students don’t appreciate the noise and activity levels during clicker questions.  This point reminded me of Richard Felder’s work on learning styles.  Felder’s model distinguishes between active learners, those who prefer to learn through discussion and interaction, and reflective learners, those who prefer to think quietly first.  He makes the great point that traditional lectures do a poor job of supporting both types of learners, since they typically provide students with little time for either discussion or quiet reflection.  The “classic” peer instruction model serves both types of learners well, however, since students are invited to respond to clicker questions first on their own, then to discuss them with their peers.  (Larry Michaelsen’s team-based learning model works similarly.)

The finding that some students decide not to engage in peer discussions because they want to wait to hear the correct explanation from the instructor was interesting, as well.  Some instructors (for example, Dennis Jacobs, who teaches chemistry at Notre Dame and is profiled in my book) are very intentional about having students surface and debate reasons for and against all of the answer choices to a clicker question.  The methods these instructors use can help students move beyond merely taking notes and memorizing explanations and toward developing critical thinking skills.  It’s possible that the students in Hoekstra’s study who preferred waiting for their instructor’s explanation might have been motivated to engage in peer discussion, and thus sharpen their critical thinking skills, had their instructors given more directive instructions or requirements.

It’s also worth noting that students who weren’t interested in hearing their peers’ incorrect explanations for clicker questions were concerned, in turn, about sharing their own incorrect explanations and thus confusing their peers.  Responding to those concerns might be an important part of motivating these students to engage in peer discussions.

I’ll finish my comments with a response to the quote from Trees and Jackson (2007).  I’ll agree that social factors are likely more important than technological factors in the success of teaching with clickers.  I would qualify that statement, however, to note that (a) the technology can enhance those social factors when used well and (b) the teaching choices that instructors make when using clickers can have a significant impact on those social factors.  As a result, we need not think of those social factors as out of our influence as instructors.

That’s the end of student participation week here on the blog.  It’s also likely the end of five-posts-in-a-week here!  I’ll return to my usual format next week, although I have found a few more interesting-looking articles on student participation to read soon…
