I attended a presentation by Daniel King, a chemistry faculty member at Drexel University, at a recent conference. He’s been using clickers for several years in both large, introductory courses and small, upper-level courses, and I thought it might be interesting to share some of his approaches to doing so here on the blog.
Daniel shared several types of clicker questions he uses. He uses clicker questions to assess students’ knowledge of course prerequisites at the beginning of lessons in which those prerequisites will be used. He likes to stimulate students’ interest in topics before discussing those topics by asking clicker questions that have non-intuitive correct answers, creating “times for telling.”
Daniel also has students predict the outcome of classroom demonstrations as a way to engage them in those demonstrations. He noted that many students don’t pay attention to demonstrations until something dramatic happens; his prediction questions engage them earlier in the process. He also frequently uses the think-pair-share / peer instruction method, engaging students in small-group discussions about difficult questions.
As for grading clicker questions, Daniel prefers to grade on effort and not to penalize students for incorrect answers. This is because (a) his questions are often designed to introduce students to topics and thus aren’t likely to be answered correctly by many students and (b) he doesn’t want his students to worry about their grade when responding; he wants them to be thinking about the chemistry.
The first semester he included clicker questions in his students’ grades, he counted them toward 5% of his students’ grades as a participation grade. Students would earn these points by answering at least 75% of the clicker questions during the term. He found, however, that a number of students who ordinarily wouldn’t attend class started coming to class just to earn these participation points. This was problematic because they were often disruptive (chatting among themselves instead of paying attention) and because they frequently responded to clicker questions without thinking about those questions, making it difficult for Daniel to interpret the results of his questions.
To alleviate these problems, the next semester, Daniel awarded 5 bonus points on the final exam to students who answered at least 75% of the clicker questions. This reward wasn’t sufficient to motivate students to attend class if they really didn’t want to, but it did reward the effort of those students who came to class and participated regularly.
Daniel provided some insight into his decision-making process regarding when to move on after a clicker question. He said it depends on his reasons for asking the question. If the question is meant to assess students’ knowledge of a concept or technique they’ll need to understand in order to follow the rest of class, he’ll spend time discussing the question unless 85% or more of his students answer it correctly. If only 50% of students correctly answer a question that he thinks they should have gotten right had they spent some time studying, he’ll tell the students who missed it to hit the books and move on with his lesson.
Daniel shared several other aspects of his use of clickers, including his use of a couple of clicker questions early in the semester that most students answer incorrectly to teach students that the most popular answer is not necessarily the correct one. He’s clearly thought a lot about his teaching choices when using clickers, and he did a great job of articulating his reasons for his choices during his presentation.