Thanks for the very thoughtful reply to my comments, Dr. Clemens. I agree that those in the humanities and those in the sciences often have trouble bridging the divide you identify here. However, since I work at a center for teaching (as you note), I’ve learned to discuss teaching and learning with instructors in a variety of disciplines, including the humanities. I have found that very few humanities instructors use clickers in their teaching, although, given what I understand of teaching and learning in those disciplines, I see great potential for using clickers there.
It’s also clear that we’re both responding, in a sense, to different ongoing conversations about education. I had to Google the term “SLO” to find out what you meant by it, for instance. (I think student learning objectives can be very useful at the course and program level, but perhaps the SLOs I’ve seen are different from the ones you’ve seen.) So there’s something of a disciplinary divide between us, but we’re also coming from somewhat different communities of practice and discussion.
As I mentioned in my earlier comments, it’s true that clickers can be used to generate data on student learning for administrators. However, that’s not their primary use, at least in higher education. (Your reference to data-hungry administrators makes me think you’re commenting more on the state of K-12 education than higher education. My K-12 experience is limited, so I’ll focus on my understanding of clicker use in higher education.) Most faculty who begin using clickers do so either because they want to know if students are following their lectures or because they want to motivate their students to engage in learning during class time. (You also seem a little wary of the term “engage.” I sometimes put it this way: I want my students to have their brains turned on during class. I think that’s a reasonable expectation.)
I think it’s important to note that the role multiple-choice clicker questions play during class is very different from the role multiple-choice questions play on exams. On an exam, each question needs to have a single correct answer; otherwise, grading it is somewhat meaningless. During class, clicker questions need not have single correct answers.
For example, I interviewed an English professor, Elizabeth Cullingford of UT-Austin, for my book. She’ll note a character’s actions in a text, then ask students to identify which of several possible motivations accounts for those actions. There may be more than one reasonable response to this clicker question; in fact, sometimes all of the motivations listed are defensible. She asks the question not because it has a right answer (or because she needs data on students for some administrator) but because she wants each and every one of her students to consider the question at hand, evaluate the given alternatives, and commit to an alternative they feel capable of defending.
She then uses the distribution of responses (displayed on the big screen) to guide the discussion that follows. She’ll often focus on the least popular answer choice and argue in favor of that choice, playing devil’s advocate with the students. As she does, she practices the kind of exemplary teaching you describe here, modeling for the students the kinds of analytical thinking in which she wants them to engage.
Here’s where the “engagement” issue turns into one of motivation: Since every student has considered the question and committed to an answer, and since most of the students chose other answers (and all students are aware of this, given the bar chart shown on the big screen), students are more motivated to pay attention to Elizabeth’s modeling at this point. They’re likely to say to themselves, “I was sure the right answer was C, but she’s arguing for B. Why B? Why not C? Oh, I see–they both have merits. This question is more complex than I thought it was.”
This is the idea of creating a “time for telling,” as it’s known in the educational literature. You can model critical thinking for students, but unless the students are ready (cognitively and affectively) to follow and make sense of that modeling, it’s not nearly as effective.
In Elizabeth’s case, she’s teaching big classes–200 students at once. It’s unfortunate, because you’re right to point out the power of small classes. Because of her class size, Elizabeth rarely leads a whole-class discussion of a clicker question. However, she could, and instructors in other classes frequently do. They’ll take a look at the bar chart and say, “It looks like choice B was a popular one. Let’s hear from a few students about their reasons for selecting B.” Then the students are called upon to defend their choices, which engages them in the very critical thinking I believe you value. In fact, a good discussion leader will, at this point, help the students debate the question among themselves instead of stepping in and “giving away” the right answer.
(I interviewed chemistry professor Dennis Jacobs of Notre Dame for my book, and he’s an expert at helping his students focus on correct scientific reasoning in this way. He waits until the very end of a healthy class discussion before confirming the right answer to a clicker question. At that point, most of the students are already convinced of the correct answer because of their peers’ arguments for it.)
Again, the clickers serve to enhance this kind of discussion. Every student has been asked to commit to an answer, so more students are ready to contribute to such a discussion. Moreover, the results of the clicker question can often encourage more students to participate. A student might think, “It looks like 30% of my peers agree with me on this, so I’m going to put my hand up and argue my position.”
Think of a clicker question as a way to frame, motivate, and enhance a rich class discussion and as a way to create a “time for telling” in which students are eager to absorb exemplary teaching. I would argue that when used in these ways, clickers do indeed improve student learning.