Reference: Campt, D., & Freeman, M. (2009). Talk through the hand: Using audience response keypads to augment the facilitation of small group dialogue. The International Journal of Public Participation, 3(1), 80-107.
Summary: This article by David Campt and Matthew Freeman describes ways to use clickers to facilitate dialogue among small-to-medium-sized groups of people (6 to 40 people) with common interests but diverse perspectives. For example, the authors mention using clickers with residents of an urban neighborhood facing tough questions that involve race and class, as well as with employees from multiple levels of hierarchy within a business discussing the mission and functions of the business. The authors describe themselves as “dialogue facilitators” and their work as a process that uses “dialogue, inquiry, and deliberation to inspire participants, build working relationships, and make decisions about collaborative actions they will take to improve their communities” (Wilson, P. (2004). Deep democracy: The inner practice of civic engagement. Fieldnotes: A Journal of the Shambhala Institute, 3, 1-6).
The authors describe a few different types of clicker questions they use to foster dialogue, including demographic questions exploring participants’ diverse backgrounds, experience questions asking participants “whether or how frequently they may have had specific experiences,” opinion questions about internal and external issues relevant to the community, and fact questions designed to explore differences between objective facts (such as statistics about demographics in the United States) and participant perceptions of those facts.
The authors also describe dialogue focused on collaborative action to have several phases, including introducing participants to each other and to the dialogue process, sharing participant experiences and perceptions, exploring diversity and commonalities with the goal of understanding “underlying social conditions” that produce diverse perspectives, and exploring possibilities for action. The authors describe several ways that clickers can enhance dialogue in each phase, but they focus primarily on the earlier phases.
For instance, asking demographic clicker questions during the introduction phase can help participants learn about each other more quickly, particularly around demographic characteristics that aren’t immediately visible, such as political affiliation or sexual orientation. These questions can provide “teachable moments” about group processes, such as reminding participants to be respectful of those with backgrounds different from their own, and help enhance “participants’ sense of empathy for others.”
During the introductory phase, clicker questions can also help to surface common intentions among participants. The authors note that when there are two “sides” on a contentious issue, often both sides have similar goals but different opinions about reaching those goals. Asking a clicker question that makes evident participants’ common intentions can help defuse some of the tension in the room that might otherwise arise.
Furthermore, “fact” questions can help bring important facts into the subsequent conversation, often “demonstrating that people in the group know less than they think they do about an issue of relevance,” leading to more open-minded attitudes.
The authors also discuss the use of participant experience questions (e.g. “How long has it been since the last time you can recall witnessing an act of racial discrimination?”) in the second phase of their dialogue facilitation: helping participants understand the variety of perspectives they have on the topic at hand. One approach is to ask such a question aloud, hear from a few participants, and then comment on any pattern that emerges (e.g. “It seems that more of the people of color have recent stories.”). However, having those patterns emerge through the results of a clicker question can demonstrate them more quickly and prevent participants from thinking the facilitator is finding patterns that he or she wants to see in the responses. The authors also note the use of demographic comparison questions, parsing the results of an experience question according to some demographic characteristic of the participants.
Other uses are discussed, as well, including showing matches between clicker question results and national polling data for some questions, helping participants come to decisions about collaborative action steps, and providing both facilitator and participants with information about participants’ feelings about a session at the end of the session.
Finally, the authors make the point several times that clicker questions and their results serve to generate productive dialogue. They are not an end in themselves.
Comments: While I typically discuss the use of clickers in college and university settings on this blog, I wanted to share and comment on this article since the authors have a particularly nuanced and informed approach to fostering dialogue, with and without clickers, that college and university instructors reading this blog might find useful, particularly those who discuss controversial or sensitive issues in the classroom. Their writing is also informed by a research literature on fostering dialogue that would likely be unfamiliar to most academics. I’m also excited by the growing use of clickers and other response systems in non-academic educational settings, such as community dialogues as described in this article, as well as church services, corporate presentations, and social science research.
I found the authors’ description of the types of clicker questions they use to align nicely with the types of clicker questions I group under the umbrella term “student perspective questions.” I usually think of these questions as being about student demographics, student experiences, or student opinions. I hadn’t thought about putting factual questions in this category, but it makes sense. Seeing how students (or dialogue participants) perceive objective facts serves a similar purpose as these other types of questions: helping the community better understand each other and helping the teacher or facilitator better understand the community. When used to demonstrate to students or participants that they know less than they think they do about a particular topic, these questions also serve to generate a “time for telling.”
When reading about the use of clicker questions to surface common intentions (as described above), I wondered if there’s a risk of having participants feel like such a question is rigged, that the facilitator is asking it mainly as a set-up to make the point that “we all have something in common.” If there’s a risk of that, I wonder what Campt and Freeman might do to minimize that risk. I also wonder what they might do if this kind of question backfires, showing that the participants have less in common than they think they do.
A few other questions occurred to me as I was reading the paper’s section on directions for future research on the use of response systems in dialogue facilitation. The authors ask, “Are there people whose verbal participation in dialogues increases as keypad use increases?” I would also ask, might a participant who finds out he or she is in the distinct minority on a particular issue be less likely to participate in discussion? The authors also ask whether the ability to provide anonymous feedback might have some distorting effect on reported opinions. I wondered that, as well, thinking about how contentious or smart-aleck participants might abuse the ability to respond anonymously.
I also wonder if there might be a role for pair or small-group discussion prior to voting in these settings. Peer instruction is a common application of clickers in educational settings; might something similar play a role in dialogue facilitation? Also, what about asking the same questions at the start and end of a dialogue session as a way to show participants how they’ve changed their perspectives over the course of the dialogue? Might that be useful in some contexts?