Back in the day, this blog was nothing but posts about teaching with clickers. (I had just written a book on the topic.) Even now, after blogging about other topics for years, posts about clickers outnumber other posts on this blog 242 to 213. I don’t blog about classroom response systems that much anymore, but they are still an incredibly useful teaching tool and, as long as they’ve been around, they’re still new to many faculty.
In that spirit, last month I was invited to speak to two departments in our School of Medicine, both interested in exploring clickers in their resident education programs. In each program, residents spend an hour each weekday in the classroom taught by rotating faculty from throughout the department. Also in each program, there are faculty experimenting with classroom response systems as engagement and assessment tools. I was brought in to help faculty in these programs think about ways clickers might enhance the “didactics” their residents experience each day. I was happy to dig out my last presentation on clickers (from over a year ago) and tune it up for these medical educators.
I started the session as I usually start talks on clickers, by putting clickers in the hands of faculty and having them play the role of students in one of my probability and statistics classes. Many faculty, particularly ones my age and older, didn’t use clickers when they were in school. (The technology wasn’t really reliable and easy to use until 2005.) I want them to have the student experience with clickers first, before turning things around and thinking about the faculty experience. Plus, having faculty participate in a “classic” peer instruction exercise very quickly demonstrates the point that peer-to-peer learning can be engaging and productive.
After talking through the peer instruction process, saying a bit about agile teaching (using a new visual metaphor), and relating the story of my all-time favorite student tweet about clickers, I transitioned to a discussion of the kinds of questions that might work well with clickers in medical resident education. While preparing for the workshops, I couldn’t find many studies or reports on the use of clickers in that setting, so I extrapolated from my knowledge of clickers in undergraduate science courses, focusing on conceptual understanding questions, application and analysis questions, and critical thinking questions. See the slides below for examples of each type.
Some of the faculty who already use clickers with residents were kind enough to share some sample questions with me. It was clear from their slide decks that they focus on higher-order thinking skills, particularly ones used in the medical diagnostic process. I saw questions from various stages of the diagnostic (and treatment) process. In which quadrant (A, B, C, D) of the given image is the abnormality? Given this information about the patient, which scan or test would you order? What is your diagnosis? What is your treatment plan? See slides 19, 24, 25, and 26 for concrete examples.
I love these questions because they disrupt students’ notions about multiple-choice questions. Such questions are supposed to have single correct answers, right? On a test, that’s true. In class, however, clicker questions need not have single correct answers. They can reflect the uncertainty inherent in tasks like diagnosing patients. Physicians have to collect and weigh evidence, then make the best decision they can given the available information. Sometimes that decision is clear, but other times the lack of complete information on a system means that there is more than one defensible response. Physicians-in-training need to develop these critical thinking skills, and these “pick the one best answer” clicker questions can help them do so. Assuming, of course, that some time is spent discussing the reasoning behind the responses. (For more examples of “one-best-answer” clicker questions, see these blog posts.)
One more note about these two presentations: During the section on application and analysis questions, I defined the terms formative and summative assessment. Formative assessment is the assessment of student learning that’s done during the learning process. It yields information about what students are learning, what they’re not learning, and how they’re learning, information useful to both instructors (so that they can be more responsive to student learning needs) and students (so that they receive the feedback on their learning that’s necessary for them to gain expertise in an area). Summative assessment, in contrast, is the assessment that’s done at the end of the learning process, for evaluative (thumbs up, thumbs down) purposes.
On two occasions in recent weeks (not at these workshops), I’ve learned that these terms aren’t known to all faculty, even faculty who are generally savvy about teaching. The distinction between the two kinds of assessment is extremely helpful to faculty, I think, and it’s one I’ll be pointing to more explicitly in future workshops and conversations.