Continuing my reports from the contributed paper session on teaching with clickers I helped coordinate at the Joint Mathematics Meetings back in January…
“Using Prediction and Classroom Voting via Clickers to Address Students’ Overreliance on the Representativeness Heuristic,” Tami Dashley, University of Texas at El Paso [Slides]
Tami Dashley is a graduate student in math education and a student of Kien Lim, one of the organizers of the contributed paper session. She shared some of her thesis research, an investigation into the connection between classroom voting with clickers and certain misconceptions students have about probability. Her work focuses on the representativeness heuristic, which she defines as “determining the likelihood for events based on how well an outcome represents some aspect of its parent population.”
Tami gave the following example: Suppose you toss a coin six times, getting a sequence of heads (H) and tails (T). Which of the following is more likely to occur: TTHHTH or HTTHHH? Someone using the representativeness heuristic would say that TTHHTH is more likely to occur since it includes an equal number of heads and tails, just like the coin does. The other option includes more heads than tails, so it would not seem as likely to someone using the representativeness heuristic. Actually, both of those outcomes are equally likely (each occurring with probability 1/64), so the representativeness heuristic is a misleading one in this example.
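For readers who want to see this empirically, here is a quick simulation (my own illustrative sketch, not part of Tami's talk) confirming that both specific sequences turn up at about the same rate, close to the exact probability of 1/64:

```python
import random

# Any specific sequence of six fair-coin tosses occurs with
# probability (1/2)^6 = 1/64, regardless of its mix of heads and tails.
random.seed(1)  # fixed seed so the run is reproducible

trials = 1_000_000
counts = {"TTHHTH": 0, "HTTHHH": 0}

for _ in range(trials):
    # Generate one random sequence of six tosses, e.g. "HTHHTT"
    seq = "".join(random.choice("HT") for _ in range(6))
    if seq in counts:
        counts[seq] += 1

for seq, count in counts.items():
    print(f"{seq}: observed {count / trials:.5f}, exact {1 / 64:.5f}")
```

Both observed frequencies land near 0.01563, which is exactly what the equal-likelihood argument predicts.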
The issue is that the representativeness heuristic is useful in some cases, but not useful in all cases. The misconception that many students have is that it’s always useful.
How to help students stop over-relying on the representativeness heuristic? Tami has been investigating the use of prediction questions, ones that ask students to predict an outcome or probability without actually computing anything. For example, students might be asked to determine which of several outcomes is most likely to occur. Since students need not be as precise when responding to prediction questions, they have some cognitive processing power freed up to focus on concepts. Clicker questions are a natural match here, since they allow students to commit to their predictions and compare their predictions to those of their peers. Then discussion of the incorrect answer choices provides an opportunity to deal with misconceptions.
Tami conducted her research in a high school setting, using three groups of students. Her “control” group received a lesson exploring the representativeness heuristic that didn’t ask the students to predict any probabilities. A second group was asked several prediction questions but didn’t use clickers to respond to the questions. The third group used clickers to respond to prediction questions during the lesson. Tami used pre- and post-tests to determine the efficacy of these three different lessons.
Tami found that her “control” group did pretty well on the post-test compared to the two experimental groups. However, most of their success came from what she called a “learned response.” In this case, many of the students picked up on the fact that “all of the above outcomes are equally likely” is often the correct answer to questions exploring the representativeness heuristic. (These are what students might call trick questions!) When Tami looked at performance on questions where “all of the above outcomes are equally likely” was, in fact, not the correct answer, the prediction-with-voting group performed better than the control and prediction-only groups.
I was very impressed with Tami’s research design and the subtlety with which she explored student misconceptions in this teaching context. I don’t believe that Tami has published this work yet, but I look forward to reading it when she does.
Image: “Heads and Tails” by Flickr user canonsnapper, Creative Commons licensed