Novice or Veteran? (Part Two)
Back in October, I ran a session on supporting faculty using clickers at the POD Network conference in Houston. Shortly thereafter, I posted here the results of some of the clicker questions I asked at the session, questions that asked participants to identify whether particular uses of clickers are more likely to be implemented by an instructor new to using clickers (“novices”) or only by an instructor with some experience teaching with clickers (“veterans”). In my earlier post, I promised to share more results from these questions, and here they are…
The majority of session participants felt that using clickers to generate classwide discussion is something that only veteran clicker users do. My guess is that in courses where class discussion is already more common (small courses, humanities courses, and so on), instructors are more likely to use clickers to enhance those discussions. In courses where class discussion is less common (large lecture courses, for instance), instructors might be less likely to use clickers for this purpose.
However, I’ve heard a number of faculty who use clickers (particularly those in the natural sciences) advise other faculty that clicker questions work best when they motivate students to focus on the reasons for and against the various answer choices. I realize that running a class discussion in a large class can be challenging, but I think in most cases students benefit from engaging with a clicker question as a class before hearing the instructor’s take on it. The good news is that clicker questions give students both the opportunity and the motivation to think through a particular question prior to a classwide discussion, which makes them more likely to be willing to contribute to that discussion.
Here are some results that didn’t surprise me. Most participants felt that creating “times for telling” by asking clicker questions most students answer incorrectly is an approach that few of those new to teaching with clickers take. Why might this be the case? One reason is that many instructors new to using clickers pose questions designed to check whether students understood a point recently made in class. The hope with these questions is that most students will answer correctly. Such questions are designed primarily for assessment, whereas questions meant to be answered incorrectly are designed more for engaging students in the learning process.
When students are confronted with results that demonstrate their lack of understanding of a particular topic, they are more motivated to resolve whatever misconception they have about that topic in order to understand the right answer. This approach to teaching is somewhat sophisticated, since it requires at least some sense of the cognitive and affective components of learning. Implementing it also requires having a good sense of what misconceptions students hold and designing questions that surface those misconceptions. All this takes some teaching experience and, in most cases, experience crafting effective clicker questions.
And here are some results that did surprise me. I talk a lot about using clickers to enable more agile teaching in my presentations on teaching with clickers, so it was a bit of a shock to see that so many of my POD Network colleagues view this use of clickers as something only experienced clicker users implement. Instructors who aren’t using clickers to make their teaching more responsive to student learning needs are missing out on one of the key benefits of clickers. In fact, there’s some evidence (thanks, Ian Beatty, for sharing that!) that gathering information about student learning and NOT responding to it is worse than not gathering that information at all.
Why might instructors new to using clickers not be comfortable altering their lesson plans on-the-fly in response to the results of clicker questions? Small deviations from a lesson plan, like explaining a point again when a clicker question indicates students didn’t get it the first time, aren’t too intimidating, but going very far “off script” is a little scary, I think. Many instructors like to know what to expect when they walk into class. That’s why we have lesson plans. Changing those plans midstream is risky: What if your on-the-fly decisions about where to take the class aren’t good ones? What if you find out students don’t understand what you’ve just explained and you can’t think of a good alternate explanation? What if your entire lesson plan gets derailed by unexpected clicker question results?
Scary questions! However, I don’t think agile teaching needs to be quite so daunting. The basic version of agile teaching goes as follows: If most students answer a clicker question correctly, you can fairly quickly move on to the next item on your lesson plan, but if most answer it incorrectly, you should spend more time on the topic before moving on. You can even plan ahead for this kind of thing; just build into your lesson plan an alternate explanation or activity to use after each clicker question. The more nuanced version of agile teaching isn’t that much more complicated: Take a look at which wrong answers are most popular, and drill down on them, having students share reasons for those answers. Again, you can often plan ahead to handle this. Try to predict which wrong answers will be most popular and plan a response to each.
So what do you think? What kinds of conditions might lead an instructor new to using clickers to use them to facilitate classwide discussion, generate “times for telling,” or practice agile teaching?