Reference: Jenkins, A. (2007). Technique and technology: Electronic voting systems in an English literature lecture. Pedagogy, 7(3), 526-533.
Summary: In this article, Alice Jenkins of Glasgow University in Scotland describes her use of clickers to teach an undergraduate poetry class with 110 students. Her primary use of clickers was for formative assessment immediately following a portion of a lecture on a particular topic, leading into “agile” teaching and class-wide discussion. Types of questions included the following:
- Application Questions – In the article, Jenkins focuses on using clickers to teach students metrical analysis of poems. She provides an example of this type of question, asking students to predict certain properties of the next line of a given poem, as well as an analysis of student responses to her example question.
- Critical Thinking Questions – She mentions asking her students “to assess certain formal qualities of poems” using a Likert scale and to choose an adjective that best describes a poem’s diction.
Jenkins also makes an interesting point about the “hand-raising” method of answering in-class questions. She asserts that this method works best for questions with two possible answers: students can be asked to raise their hands for one answer and keep their hands down for the other. This lets students answer a little more independently than the usual method of asking for a show of hands for each answer choice in turn. However, she also notes that non-participating students muddy the results, since their lowered hands are indistinguishable from “votes” for the second answer choice.
Jenkins is also fond of including an “I don’t know” answer choice, which isn’t possible with binary “hands-up” questions. This option discourages students from voting randomly (presumably because it prompts them to ask themselves, “How confident am I in this answer?”) and gives Jenkins a sense of the difficulty level of a given question.
Jenkins was observed by a colleague during the classes in which she used clickers. The colleague reported that Jenkins had “asked for and received oral responses from the students 28 times.” Jenkins said she was “astonished” to hear this, presumably because this high level of interaction was unusual for this large course.
Jenkins also surveyed her students about the use of clickers. One interesting result was that 60% of her students said they “worked out the answers to all the questions” when clickers were used, versus only 10% without clickers. Students identified the following as the primary benefits of clickers: (a) helping students assess their own understanding, (b) allowing for anonymous responses, (c) helping the instructor assess student learning, and (d) increasing participation.
Commentary: This is the first published article I’ve found describing the use of clickers in a humanities class, so I was pretty excited to discover it. (Stuart, Brown, and Draper (2004), also from Glasgow University, describe the use of clickers in a philosophical logic course, but that kind of course is fairly unusual in the humanities.) I think there’s a lot of potential for the use of clickers in the humanities, and the interesting application and critical thinking questions Jenkins describes in this article are great examples of that potential. I hope that this article encourages others in the humanities to consider using clickers.
One of the reasons I think that instructors in the humanities have difficulty seeing value in the use of clickers is that their experience with multiple-choice questions comes from exams, where such questions are usually factual. Asking factual questions in class with clickers isn’t usually particularly exciting or useful, so I can understand why humanities instructors might not see value in clickers.
However, one can use clickers to ask “one-best-answer” questions that encourage critical thinking. For these questions, students are asked to choose from several answers, more than one of which has some merit. The point of these kinds of questions isn’t to find out if students can identify the “right” answer, since there are no “right” answers. Instead, the point is to engage students in a question (by asking all students to think about the question independently and commit to an answer) to lead into a rich class-wide discussion of the material. These kinds of “one-best-answer” questions don’t work well on exams (unless one requests that students defend their choices, I guess), but they can work very well in class.
Finally, I think it’s really interesting that 60% of Jenkins’ students said they thoughtfully considered questions asked via clickers versus 10% who said they would do so for questions not asked via clickers. I’m reminded of a student of Elizabeth Barkley’s captured in a video Elizabeth shared at a conference I attended. In commenting on the Think-Pair-Share collaborative learning technique, the student said something like, “With Think-Pair-Share, I know I’m going to have to pair up and share my thoughts on a question, so I think about the question. If I know I’m not going to ‘pair’ or ‘share,’ then why should I ‘think’?” Jenkins’ survey results support this assertion!