Last night I led a workshop titled “Leveraging Diversity: The Wisdom of Crowds in University Teaching.” My goal in the workshop was to explore ways to use crowdsourcing in the college classroom to leverage the various kinds of diversity (cultural, cognitive, affective) we see in our students. There are many great resources out there on being respectful of diversity in the classroom, that is, on inclusive teaching, but fewer resources on taking advantage of student diversity to create more effective learning environments. Given what I’ve been reading lately about crowdsourcing, I wanted to see if the “wisdom of crowds” idea might provide some helpful tools.
See my blog post on the Vanderbilt Center for Teaching blog for the Prezi I used as well as details on the examples of crowdsourcing activities I mentioned in the workshop. Here on my personal blog, I thought I would pull back the curtain a bit and talk about how I facilitated this session. There was at least some interest on Twitter in hearing more about the structure of the session beyond what’s visible in the Prezi!
We started the workshop with brief introductions. I then framed the topic a bit, noting that my use of the term “diversity” would include not only cultural or identity diversity, but cognitive and affective diversity, as well. I then used a couple of quotes from James Surowiecki (The Wisdom of Crowds) and Scott Page (The Difference) to make the argument that more diverse groups are often better problem-solvers than less diverse groups, even when the less diverse groups consist of experts in the domain. What Surowiecki and Page don’t claim is that diverse groups of students can learn more effectively than less diverse groups. That’s the argument I wanted to make in the workshop.
After that introduction, I wanted the workshop participants to experience the wisdom of crowds. I did so in two ways. First, I asked my friends on Twitter to respond to the question “What are some strategies for encouraging students with minority viewpoints to contribute to face-to-face or online class discussions?” I told the workshop participants that I would share those responses later in the session.
Then I passed around a jar of jelly beans and asked the participants to guess how many jelly beans were in the jar. Each participant submitted his or her guess on an index card independently. Then I told the participants that they had to come up with a guess as a group. While I entered their individual guesses in a spreadsheet, the 18 participants started discussing the question and trying to come to a consensus.
There were 1,048 jelly beans in the jar. The average of the individual guesses was 899, not too far off. The group’s consensus guess was 700, not quite as good. The best individual guess was 1,027, remarkably accurate. (The worst was 5,000…) After sharing these data, I had the participants debrief the experience of coming to a group consensus. What did I want them to come away from this experience with?
- The crowd is often smarter than any one individual. In this case, we had two guesses (1,027 and 1,000) that were better than the average of the individual guesses, but the average was better than all the rest of the individual guesses.
- For a problem like this, having the members of the crowd work independently is usually better than having them collaborate. During the debrief, we talked about reasons why this is the case. Some individuals didn’t feel like their guesses were very good, so they didn’t contribute to the group discussion. Others didn’t know how many in the group agreed with them, which also inhibited their contributions. And some noted that a couple of leaders emerged during the discussion, which helped move the discussion along but also perhaps biased the discussion in certain ways.
- Although getting the right answer might benefit from independent work, learning from each other does not. Some participants mentioned how they reconsidered their personal jelly-bean-counting strategies after hearing in the discussion how others had approached the problem. This particular question had a definite answer, but since there were multiple ways to approach the question, it lent itself to productive discussion.
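For those who like to tinker, the aggregation at the heart of the jelly bean experiment is just an average of independent estimates. Here’s a minimal Python sketch of that idea; the guesses below are invented for illustration, not the actual workshop data:

```python
# Wisdom-of-crowds aggregation: average a set of independent guesses,
# then see how the crowd's estimate compares to each individual's error.
# The guesses below are made up for illustration.

actual = 1048  # jelly beans in the jar

guesses = [850, 1200, 600, 1027, 950, 700, 1100, 800]

crowd_estimate = sum(guesses) / len(guesses)

crowd_error = abs(crowd_estimate - actual)
individual_errors = [abs(g - actual) for g in guesses]

# How many individuals beat the crowd's average?
beat_crowd = sum(1 for e in individual_errors if e < crowd_error)

print(f"Crowd estimate: {crowd_estimate:.0f} (error {crowd_error:.0f})")
print(f"Individuals who beat the crowd: {beat_crowd} of {len(guesses)}")
```

In the workshop, only two of the eighteen guesses beat the average; with invented data like the above you’ll see a similar pattern, with most individuals doing worse than the crowd.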
After this bit of experiential learning, I shared some concrete examples of crowdsourcing, some from educational settings, some from other settings. See my CFT blog post for the details.
The tool that probably received the most interest was the prediction market. We discussed a few great types of questions for use with prediction markets in the classroom (questions related to current events, questions related to the outcomes of experiments or design problems, questions asking students to predict the average grade on an upcoming exam), and I’m now eager to see how I might use prediction markets in my cryptography seminar next fall. What I don’t know at this point is what kind of Web services are available to implement prediction markets. If you know of any free and/or open-source prediction market platforms, please let me know!
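I haven’t settled on a platform, but the core mechanics of a prediction market are simple enough to prototype. One common mechanism is Hanson’s logarithmic market scoring rule (LMSR); here’s a minimal Python sketch of a two-outcome market (the liquidity parameter `b` and the scenario are my own illustrative choices, not part of any particular platform):

```python
import math

# Minimal two-outcome prediction market using Hanson's
# logarithmic market scoring rule (LMSR).
# b is a liquidity parameter: larger b means prices move more slowly.

B = 100.0

def cost(q, b=B):
    """Cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def price(q, i, b=B):
    """Current price (implied probability) of outcome i."""
    exps = [math.exp(qi / b) for qi in q]
    return exps[i] / sum(exps)

def buy(q, i, shares):
    """Charge for buying `shares` of outcome i; returns (charge, new q)."""
    new_q = list(q)
    new_q[i] += shares
    return cost(new_q) - cost(q), new_q

# Market opens with no shares outstanding: both outcomes priced at 50%.
q = [0.0, 0.0]
print(price(q, 0))

# A student who believes outcome 0 is likely buys 50 shares;
# the price of outcome 0 rises above 0.5 as a result.
charge, q = buy(q, 0, 50)
print(round(price(q, 0), 3))
```

The appeal for a classroom setting is that prices double as class-wide probability estimates, so students can see the group’s collective judgment shift as new arguments and evidence come in.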
After showing participants several different examples of crowdsourcing activities, I then asked them to do a little crowdsourcing themselves. Instead of sharing with them what I saw as the key principles for turning a crowdsourcing activity into a learning activity, I asked them to identify those principles. I gave each participant a couple of 4″ x 8″ Post-It notes and asked each to write down at least two characteristics of a crowdsourcing activity that would likely make it an effective learning activity.
Each participant’s Post-It notes were one of three different colors, and after the participants had a chance to write down their ideas, I asked them to assemble next to three different whiteboards by color. Each color group was then instructed to post their ideas on their whiteboard and cluster them like with like. Each group also had to label each of their clusters with a word or short phrase that captured the ideas in that cluster. Once the groups had done so, I had each group share out. I had planned to have the first group share out, then ask the second group to integrate their clusters with the first group’s clusters, then do the same for the third group. I ran out of time for that, so we just did some quick sharing with the large group.
I have to say that I was impressed with the characteristics that the groups came up with in such a short amount of time. I’ve been thinking about this key principles question for weeks now, but the workshop participants managed to abstract most of the principles I had identified in about ten minutes. That nicely reinforced my point about the wisdom of crowds! Here are the principles the groups identified:
- Sharing & Peer Review – The activity needs a mechanism by which students can share their (diverse) perspectives and respond to each other’s perspectives. From the instructor’s point of view, these mechanisms also make it easier to observe and monitor student learning.
- Sharing-Friendly Climate – Not only must there be a mechanism, but there must be a climate in which students feel comfortable expressing themselves and motivated to do so, even when they feel they are in the minority. How to create such an environment? Well, that was the question I had put to my Twitter friends at the start of the session! You can see their responses at the end of this blog post.
- Student-Centered Process – The activity should be constructed around helping students surface and build on their prior knowledge and experiences. The learning happens as students engage in this process, not as they “sit and get” information from their instructor.
I noted that Surowiecki identifies a few key components of effective crowdsourcing activities: participants must be able to generate ideas independently (something we saw earlier in the jelly bean experiment), there must be some sort of decentralization so that participants can be active agents in the process, and there must be some kind of aggregation mechanism so that the group decision can be determined. I think the idea of independence is connected to the student-centered idea, since each student needs to tap into his or her own prior knowledge and experience when starting the activity. Decentralization is about putting power in the hands of students and thereby giving them some ownership in the process, which helps motivate them to share. And a good aggregation mechanism lets participants share and review each other’s work, so there’s a fair amount of alignment between Surowiecki’s principles and the ones identified by the workshop participants.
One point that didn’t arise during the workshop activity was the importance of asking students the right kind of questions, those that permit multiple points of view. We discussed this briefly during our discussion of prediction markets, as I mentioned above. Even when a question is a multiple-choice question (like a prediction market question or a clicker question), there can be more than one defensible answer or more than one way to arrive at a given answer. While open-ended questions by their very nature permit multiple perspectives, “close-ended” questions can, too.
Moreover, when you leverage the “wisdom of crowds” in your courses, you can pose questions to your students that are more challenging than ones that can be tackled by individual students. That is, these crowdsourcing techniques allow your students to take on more complex, more challenging, and thus more interesting and realistic problems in your courses. That’s another plus for using these techniques.
I’ll also add that these crowdsourcing techniques are examples of social pedagogies in which students constitute their own “authentic audience.” These pedagogies have the potential to better motivate students to engage and learn than more traditional pedagogies in which the only audience for student work is the instructor. For example, one workshop participant asked how the discussion in a prediction market (like this discussion on the NITLE prediction markets) differs from the discussion we might ask of students in a Blackboard discussion forum. I responded by saying that the prediction market provides a game-like structure for the discussion that’s more engaging and motivating than the “busywork” context of a Blackboard discussion. And, as Alan Levine blogged this morning, motivation is a key ingredient in learning.
Looking back on the workshop, I probably packed in too much content in the middle. The activities at the beginning and end of the session worked very well, and I had a third activity planned that would have moved the participants from the more abstract thinking of key principles to more applied thinking about using these ideas in their own teaching. I probably could have skipped a couple of the crowdsourcing examples in my Prezi to free up more time for this application piece. I also wish I had done a better job early in the session surfacing the participants’ experiences with crowdsourcing. I knew going into this workshop that it was a “version 1.0” kind of thing, however. I hope to offer a “version 2.0” of this workshop at some point in the near future!
Finally, here are those responses via Twitter to my question about encouraging students with minority viewpoints to contribute to discussions…
- @MsAnastasia Anastasia Salter
I find that creating environments online that students feel ownership of creates more opportunities for diverse perspectives [+]
- @MsAnastasia Anastasia Salter
For instance, allowing students to create safe spaces within online discourse, then encouraging others to read and respond [-]
- @polarisdotca Peter Newbury
#astro101 very Euro-centric. Minority students share role of astronomical events in their cultures (esp Chinese, Persian)
- @Bio_prof Ana Maria Barral
From my own experience it is all about community- feeling safe encourages expression of discordant views.
- @fismat Fraz Ismat
First need to identify those students… I find that “random picking” sometime works.
- @fismat Fraz Ismat
Another approach is to offer examples of alternative views & ask those who agree to expand on the discussion.
- @bretbenesh Bret Benesh
“What are strategies for encouraging students…to contribute to…discussions?” Clickers? Did I get it right?