Article: Webking & Valenzuela (2006): Clickers and Critical Thinking in Political Science

Reference: Webking, R., & Valenzuela, F. (2006). Using audience response systems to develop critical thinking. In D. A. Banks (Ed.), Audience Response Systems in Higher Education: Applications and Cases. Hershey, PA: Information Science Publishing.

Summary: Webking and Valenzuela describe ways they use classroom response systems in their political science courses at the University of Texas at El Paso to foster critical thinking through active participation and class discussion. After noting some commonly cited advantages of teaching with clickers—easier attendance and participation record-keeping, greater participation through anonymity and accountability, and the collection of data to inform agile teaching decisions—the authors provide several concrete examples of clicker questions they have found valuable for developing their students’ critical thinking skills.

The authors’ first example is a sequence of clicker questions that guides students through a close reading of a few passages in the play Antigone. At one point in the play, Antigone makes a statement that seems to express very clearly her belief that obedience to the gods trumps obedience to the king. At another point, however, she makes a somewhat cryptic statement that calls this earlier assertion into question. Webking and Valenzuela start with an understand-level question that asks students to clarify this second statement. They follow this with an application-level question asking students to identify a logical consequence of her cryptic statement, one that seems to run counter to her earlier statement about serving the gods. Their third question, an analysis-level one, asks students to reconcile Antigone’s two seemingly contradictory statements by identifying a hidden motivation of hers that makes them consistent.

Webking and Valenzuela also describe how they use a particularly challenging, analysis-level question about Plato’s Euthyphro. The question asks students to identify the central argument of a particular passage, one that deals with the relationship between justice and piety. It is what Jean McGivney-Burelle would call a “horizontal question,” since students answering it typically split evenly among three answer choices. Webking and Valenzuela note that one of the three popular responses can’t be supported by the text. Students who argue for this answer choice quickly realize that they were projecting their own perspectives onto the text, not arguing from the text. This is a useful metacognitive moment for these students. The class discussion then focuses on the remaining two popular answer choices. Making sense of these two choices requires the students to grapple with categorical logic, the kind that is well represented by Venn diagrams. Once the students have discussed their way to the correct answer, they realize the value of categorical logic in making sense of arguments like the ones Plato makes—another metacognitive moment.
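The chapter doesn’t reproduce the actual answer choices, but to give a sense of the categorical logic involved, here’s my own hypothetical sketch (not the authors’) of the kinds of claims about justice and piety such choices might make, each corresponding to a different Venn diagram:

\[
\text{(a)}\ \mathrm{Pious} \subsetneq \mathrm{Just}
\qquad
\text{(b)}\ \mathrm{Just} \subsetneq \mathrm{Pious}
\qquad
\text{(c)}\ \mathrm{Pious} = \mathrm{Just}
\]

In (a), the circle for the pious sits entirely inside the circle for the just (every pious act is just, but some just acts aren’t pious); in (b), the containment runs the other way; in (c), the two circles coincide. Deciding among readings like these is exactly the sort of reasoning a Venn diagram makes visible.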

The Plato example comes from one of the authors’ smaller, upper-level courses, and they assert that “it is in a smaller class that the [classroom response] system is at its best in encouraging discussion and precise argument.” They reach this conclusion, in part, because their classroom response system can report individual students’ responses to the instructor as those responses are submitted. The authors use these individual, real-time results to guide their post-vote discussions, focusing on “groups which had difficulties in reaching consensus, students or groups which answered particularly quickly or particularly slowly, students who disagreed with their groups, students who changed their minds, and so on.” They argue that the ability to see individual, real-time results is important in leading effective post-vote discussions since it allows instructors to analyze “each student’s rational odyssey with each question.”

Also in the article are two examples of student perspective questions the authors use to motivate particular topics in their courses. In one example, they ask students to identify questions they aren’t likely to ask someone they’ve just met. Invariably, students identify questions about religion and politics. The authors point out that one reasonable conclusion from this is that religion and politics are the least important things to know about when getting to know someone, a conclusion most students find hard to accept. That tension motivates students to want to learn why this social phenomenon exists.

Comments: This would be a great article to give a faculty member in political science or philosophy who’s interested in getting started teaching with clickers. Webking and Valenzuela provide a concrete, interesting example of a guided close reading of a text (Antigone) using clicker questions of increasing difficulty. This is a great model for instructors in the humanities and social sciences interested in helping their students develop critical thinking and close reading skills. I wish, however, that they had included some voting data in this example and had discussed how they use the results of these questions to guide discussions, as they did with their Plato example.

The Plato example is a great model of clicker use in text-based courses, too. One reason is that the approach Webking and Valenzuela use leads students to appreciate the nature of argument in their discipline. They write, “In time, and actually not very much time, students learn to care more about the strength of the argument than about having their initial position defended as right.” The authors present a useful list of options for leading these kinds of class discussions—focusing on groups that were conflicted, students who answered quickly or slowly, students who changed their minds, etc.

The authors assert that the quality of discussions they can foster depends on the instructor having access to real-time, individual voting data. Not all classroom response systems have this feature, and, in my experience, instructors who do have the option of looking at individual results as they come in rarely take advantage of it. I think that perhaps the availability of real-time, individual results isn’t as critical as Webking and Valenzuela assert. I’ll often have my students vote on a question individually, then discuss it in groups, then vote again. I’ll sometimes ask for a student who changed his or her mind between the first and second votes to explain his or her reasoning. I can also see asking for a student who disagreed with his or her group to contribute to the post-vote discussion. (That’s a nice idea, one that I’ll have to try soon!)

My approach, using the aggregate and not individual voting data, relies on students who fit certain profiles volunteering to share their perspectives with the class. Webking and Valenzuela’s approach doesn’t rely on volunteers, but it isn’t quite cold-calling, either, since they select students only after the students have had a chance to consider and respond to the clicker question. I’d like to call this “warm-calling” since the students have had a chance to warm up to the question and since the instructors aren’t calling on students without any knowledge of what those students might contribute to the discussion. I’m not familiar with many instructors who practice warm-calling. If you do, I’d love to hear from you in the comments about your experiences doing so.

Image: “Coffin Sculpture of Antigone” by Flickr user Xuan Rosamanios / Creative Commons licensed
