Article: Higdon et al. (2011) – Word Clouds for Free-Response Questions (#EDUSprint)

Reference: Higdon, J., Reyerson, K., McFadden, C., & Mummey, K. (2011). Twitter, Wordle, and ChimeIn as student response pedagogies. EDUCAUSE Quarterly, 34(1).

Summary: This article describes the development, use, and assessment of two systems at the University of Minnesota for asking students to respond to open-ended questions during class and aggregating those responses on the fly using word clouds.

The first system involved the creation of ten Twitter accounts for a course on medieval cities of Europe. Login information for all ten accounts was shared with students in the course, and the students were asked to use these accounts to tweet their thoughts while watching films during class. The instructor provided a few basic prompts to help guide these tweets. A student could use any mobile device he or she had handy to participate: cell phone, smartphone, laptop, and so on. Students on smartphones and laptops could see and respond to their peers’ tweets during the film. Then, after the film, the instructor ran the tweets through Wordle to create a word cloud used to guide the post-film class discussion.
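
To make that last step concrete, here’s a minimal sketch of turning a batch of collected tweets into a word cloud image. It uses the third-party Python wordcloud package in place of Wordle (a web tool), and the sample tweets are invented for illustration:

```python
# A minimal, hypothetical version of the tweets-to-word-cloud step.
# Requires: pip install wordcloud
from wordcloud import WordCloud

# Stand-ins for the tweets collected from the shared class accounts.
tweets = [
    "The plague reshaped the labor market in medieval London",
    "Guilds controlled who could trade inside the city walls",
    "The plague emptied whole neighborhoods of the city",
]

# WordCloud tokenizes the combined text, drops common English stopwords,
# and sizes each remaining word by its frequency.
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate(" ".join(tweets))
cloud.to_file("post_film_cloud.png")  # image to project for the class discussion
```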

The authors conducted a fairly extensive assessment of this classroom response system. Attendance was up over previous semesters, and classroom observers indicated that students seemed attentive throughout the films. (Poor attendance and lack of attention were the two main reasons behind the use of this system.) The average number of tweets per student ranged from 2.7 to 3.4 during the class sessions in which the system was used. Although many of those tweets occurred during the first 20 minutes of each film, there was a steady stream of tweets throughout each film. A content analysis of the tweets indicated that 64% of the tweets were on some level insightful, and an additional 23% were at least relevant if not particularly insightful.

Student response to this classroom response system was mixed, however. Relatively few of the students surveyed thought that this use of Twitter was helpful to their learning, and many seemed to find it distracting. The authors raise a few possible reasons for the conflict between these perceptions and the more objective, largely positive data about student use of the system. They note that students might not have enjoyed the requirement that they participate actively during the showing of the films, that this form of participation was unfamiliar and thus uncomfortable to students, and that the Twitter backchannel might not, in fact, have contributed much to their learning, in spite of the positive data about student attention and engagement.

Although this first system worked well (at least by some measures), it was a little clunky and involved third-party tools (Twitter and Wordle). And so the University of Minnesota developed a new system called ChimeIn that replicated the function of the first system in a single, easy-to-use platform that keeps all the data on university servers. ChimeIn allows instructors to pose true-false, multiple-choice, and free-response questions during or between classes. Students can then log in and respond to those questions using a web interface or text messaging. Responses to true-false and multiple-choice questions are displayed using bar charts, and responses to open-ended questions are displayed using word clouds and simple chronological lists of responses. These two displays are integrated, so that clicking on a word in the word cloud filters the response listing to show only those responses that include the selected word.
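
That word-cloud-to-list interaction is simple enough to sketch in a few lines. This toy version of the filtering logic is mine, not ChimeIn’s actual code; the function name and data shapes are illustrative:

```python
# A toy version of the linked-view behavior: clicking a word in the cloud
# filters the chronological response list to entries containing that word.
import re

# (id, text) pairs standing in for student responses in submission order.
responses = [
    (1, "The plague changed everything about city life"),
    (2, "Trade routes brought both goods and disease"),
    (3, "Plague deaths reshaped the labor market"),
]

def filter_by_word(responses, word):
    """Return the responses that contain `word` as a whole word, keeping order."""
    pattern = re.compile(rf"\b{re.escape(word)}\b", re.IGNORECASE)
    return [(rid, text) for rid, text in responses if pattern.search(text)]

print(filter_by_word(responses, "plague"))  # -> responses 1 and 3
```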

ChimeIn is tied to the university registrar’s database, so students enrolled in a course are automatically enrolled in the corresponding course in ChimeIn. Instructors can choose to have student responses to individual questions identified or kept anonymous. The response visualizations can be updated in real time as responses are submitted or shown to the class only once all responses are in. And ChimeIn makes it easy to log out and back in so that, for instance, two students can share a mobile device during class. As of the writing of this article, however, ChimeIn does not allow students to view their peers’ responses on their own devices, which means students are not able to respond to or comment on their peers’ remarks.

ChimeIn has been rolled out across the University of Minnesota. About 100 courses have at least five questions in the system, and over 1,400 questions have been created across all courses. Seven hundred students have linked their cell phones to the system to allow for text-message responses to ChimeIn questions. The system has been used outside the traditional classroom, too: the student union uses it on its on-site digital signage to poll students passing by, and in one pharmacy course, instructors have students use the system to post information about commercials (drug-related or otherwise) they see while watching television.

Comments: In my recent post, “Mobile Learning – Much More Than Just Content Delivery,” I identified five basic types of mobile learning. Number one on the list was the use of mobile devices as “super-clickers,” and this article from the University of Minnesota is a great example of this type of mobile learning. As I’ve said for years now, mobile devices solve half of the challenge of using free-response questions with classroom response systems: They make it easy for students to enter responses to open-ended questions. (Dedicated clicker devices are only just now solving this problem through more intuitive text-entry mechanisms.) The University of Minnesota team has made a good start on solving the other half of the challenge: They use a few sensible tools for displaying and making sense of responses to open-ended questions on the fly during class.

Developing tools for making sense of free responses (text responses or other kinds) during class is certainly an open problem. I’ve floated a few ideas here on the blog in the past: creating class tag clouds based on descriptors students give to an image or piece of text, plotting virtual push-pins on an image or a map to show where students think a particular feature is located, asking tablet-enabled students to submit drawings and diagrams that instructors can make sense of visually, and integrating student-selected quotations in a shared digital copy of a text. I’m glad to see the University of Minnesota team take the word cloud idea and run with it. The ChimeIn feature they’ve developed that integrates the word cloud with the chronological listing of responses is a great innovation, one that makes the word cloud visualization even more useful.

The ChimeIn developers are interested in developing other visualization tools, and there’s certainly room for creativity in this space. One of the examples in the article involves asking students the same question before and after a learning experience, then comparing the pre- and post-experience word clouds to see how student responses change over time. I find straight-up word clouds hard to compare. Word clouds in which word frequency controls not only the font size but also the order of the words displayed are a bit easier to compare, like these word clouds from the dating site OKCupid comparing how heterosexual men describe their interests with how heterosexual women do:

There’s also the approach Drew Conway took in comparing speeches given by President Barack Obama and Sarah Palin, assigning some meaning to the horizontal axis in a word cloud visualization:

Moving away from word clouds entirely might work for this before-and-after scenario, too. I can see listing the terms used before and after in a column, with a bar graph on the left showing frequency of those terms before the learning experience and a bar graph on the right showing frequency after.
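
Here’s a rough sketch of that two-sided layout rendered as a plain-text chart; the terms and counts are invented for illustration:

```python
# A toy rendering of the before/after idea: one row per term, a bar on the
# left for pre-experience frequency and a bar on the right for post.
from collections import Counter

pre = Counter({"plague": 9, "trade": 4, "guilds": 2})   # hypothetical counts
post = Counter({"plague": 5, "trade": 8, "guilds": 6})

# Sort terms by combined frequency so the most-used words come first.
for term in sorted(set(pre) | set(post), key=lambda t: -(pre[t] + post[t])):
    print(f"{'#' * pre[term]:>10} | {term:^8} | {'#' * post[term]}")
```

The shared vertical spine makes shifts in emphasis (plague fading, trade growing) easy to spot at a glance.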

For “regular” free-response questions (ones that don’t have the pre/post aspect), Jeff Clark’s word cluster diagram might be a useful tool:

Words that appear near each other in the text are grouped together spatially and through color. There’s also the word tree idea, which captures more of the context in which words are used. Here’s one for Alice in Wonderland:

Font size is still used to represent frequency, but words that appear next to each other in the text are linked in the visualization, as well.
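
The data structure behind a word tree is easy to prototype: for a chosen root word, collect the short phrases that follow each of its occurrences, with counts, so shared continuations can be drawn as branches. This sketch is mine, with a made-up snippet of text standing in for student responses:

```python
# A toy word-tree builder: count the continuations that follow a root word.
from collections import Counter

text = ("alice asked the cat . alice asked the queen . "
        "alice asked why . the queen shouted")

def word_tree(text, root, depth=2):
    """Count the `depth`-word phrases that follow each occurrence of `root`."""
    words = text.lower().split()
    branches = Counter()
    for i, w in enumerate(words):
        if w == root:
            branches[tuple(words[i + 1:i + 1 + depth])] += 1
    return branches

print(word_tree(text, "asked"))
# Counter({('the', 'cat'): 1, ('the', 'queen'): 1, ('why', '.'): 1})
```

A real visualization would merge branches that share a first word (both “the” continuations here), which is exactly the grouping the tree layout conveys.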

There’s also the idea of having the students identify the more important responses through some kind of amplification tool. The simplest version is the “vote up / vote down” mechanism, as seen in Google Moderator and in Ideascale, the latter featured in this week’s EDUSprint on mobile learning from the EDUCAUSE Learning Initiative. (Thus the #EDUSprint hashtag in the title of this blog post.) When using Twitter, you could encourage students to retweet peer comments they find particularly insightful, then give more weight to the comments with the most retweets. There are problems with these amplification approaches (for instance, how do you give proper attention to the “long tail” of comments that don’t get many votes or retweets?), but they are useful for quickly making sense of responses to open-ended questions without breaking those responses into individual words.
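
The mechanics of that weighting are straightforward; here’s a small sketch (data invented) that surfaces the most-endorsed comments first while keeping the long tail in view:

```python
# Order responses by endorsement count so the most-amplified comments surface
# first; keep (rather than discard) the low-vote tail so it can still be skimmed.
responses = [
    {"text": "Plague labor shortages raised wages", "retweets": 14},
    {"text": "The film skipped the role of the guilds", "retweets": 2},
    {"text": "Sanitation mattered more than the film suggests", "retweets": 0},
]

ranked = sorted(responses, key=lambda r: r["retweets"], reverse=True)
head, long_tail = ranked[:2], ranked[2:]  # discuss the head aloud, skim the rest

for r in head:
    print(f'{r["retweets"]:>3}  {r["text"]}')
print(f"(+ {len(long_tail)} more low-retweet responses)")
```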

I’ll end this post with a few quick comments about other aspects of the article:

  • I like that students were encouraged to tweet their thoughts during, not after, the film. This backchannel approach provides a useful way to keep students engaged during what would otherwise be a pretty passive learning experience.
  • I also appreciated that the authors recognized that limiting student responses to 140 characters (the limit imposed by Twitter) motivates students to be concise in their writing. See my colleague Patrick Bahls’ use of tweets-without-Twitter for more on this idea.
  • I was surprised that only 69 out of 77 students in the pilot class indicated they had a mobile device with which they could tweet. Perhaps they weren’t aware that you can tweet via SMS text-messaging, a capability that almost all students have on their cell phones, judging from national surveys.
  • My read on the negative student reaction to the Twitter experiment is in line with one of the authors’ hypotheses: that students just weren’t used to participating in this way. For students who are just tweeting about a film as it happens, not reading and responding to other students’ tweets, participating in the backchannel is not much different from, or more distracting than, taking a few notes in a notebook during the film. My guess, however, is that students didn’t see that parallel.
  • The authors include a great “time for telling” anecdote in the paper. The instructor didn’t see the plague as the central theme of a movie about medieval London, but seeing “plague” as the biggest word in the word cloud at the end of the movie gave her the chance to “articulate clearly to the class the role that the plague played–and didn’t play–in the formation of medieval London.”
  • The authors and some of their colleagues seemed to be very concerned that student responses to open-ended questions not be available on the open Web. When they used Twitter, they made sure to make all class Twitter accounts private, and the new ChimeIn system is inherently a closed system. I understand some of the reasons for this (like the concern expressed in the paper over confidentiality in a course on gender and sexuality), but by closing the classroom response system, you lose the network or multiplier effect that Twitter provides. Why limit the class conversation to those actually enrolled in the class if there are others, particularly students, who are interested in participating in the conversation in meaningful ways?

Thanks to the University of Minnesota team for experimenting with free-response classroom response systems and for taking the lead in developing innovative visualization and analysis tools for these kinds of questions. I look forward to seeing more great ideas from the team as it continues to develop ChimeIn.