Rap Genius, Course Constitutions, and Formative Assessment: #FutureEd Week One

This weekend I caught up on “The History and Future of (Mostly) Higher Education,” a massive open online course (MOOC) taught by Cathy Davidson of Duke University on the Coursera platform. I’m interested in the course not only for the discussions about higher education it is fostering, but also for the processes that Davidson and colleagues are using to encourage and facilitate those discussions. I’m part of a CIRTL Network team designing a MOOC on college science teaching set to run this fall, and a key part of our design is what we’re calling “MOOC-supported learning communities,” in which local groups of MOOC participants benefit from and contribute to the overall MOOC experience. Davidson is doing something similar with her MOOC, coordinating with courses, events, and other activities at several dozen colleges and universities.

While participating in the MOOC this weekend, I didn’t see any of the local-global interactions I was hoping to see, ones that might inform the design of our MOOC-supported learning communities this fall. However, I did observe a number of interesting dynamics in the MOOC itself. Since the content of the MOOC wasn’t particularly new to me (Dwayne Harapnuik covered similar ground in his portion of our 2010 POD Network Conference talk), most of the observations that follow deal with the processes at play in this MOOC.

Rap Genius for Collaborative Annotations. I did not see this coming. Okay, maybe out of the corner of my eye. I had read something Cathy Davidson wrote about Rap Genius, a site designed around crowdsourced annotation of rap lyrics, but I had a hard time believing the site could be used for academic work. This, coming from the guy who used Pinterest as a social bookmarking platform in his last statistics course. Boy, was I wrong.

Davidson posted one of the week’s readings to Rap Genius and invited MOOC students to annotate it. It took me a few minutes to get the hang of the site (not intuitive: you start a comment thread on an annotation by adding a “suggestion”), but once I did, I found it to be very well suited for collaborative annotations. Any word or phrase in the text of the document can receive an annotation, and it’s easy to see who authored each annotation and comment. And I found annotating and commenting to be much more comfortable activities than editing the MOOC’s “course constitution” on the course’s wiki site.

I’m still surprised that the Coursera platform doesn’t feature useful crowdsourcing tools like Rap Genius. One of the potential strengths of a MOOC is the “crowd” it gathers, but it takes good crowdsourcing tools to leverage that strength.

Course Constitutions. It seems that in most or all of Davidson’s on-campus courses, she has her students work together to create a course constitution or manifesto. Through this process and document, she and her students decide what they hope to learn or accomplish in the course, how they will learn from each other, and what kinds of behaviors will be encouraged and discouraged. As I mentioned above, Davidson has invited her MOOC students to write a course constitution, using Coursera’s wiki tool. I think course constitutions of this sort are a great idea, even if they don’t seem to work as well at MOOC scale as they do in a small class.

What bothers me about Davidson’s treatment of course constitutions is that she doesn’t acknowledge that many others have engaged in this practice before her and independently of her. See Chapter 7, “Preventing and Responding to Classroom Incivility,” in Linda Nilson’s book, Teaching at Its Best, or Chapter 8, “Promoting Collaborative Learning,” in Rena Palloff and Keith Pratt’s book, Building Online Learning Communities, for examples of what are more typically called class contracts. Some class contracts focus on civility, others on rights and responsibilities of students, others on classroom discussions. Davidson’s “community manifesto” is a participatory culture spin on a regularly used practice in university teaching, but she presents it as if she had discovered this technique.

Mysterious Pop-Up Quizzes. Most of the lecture videos in the course feature a pop-up quiz question somewhere during the video. In contrast to many courses on the Coursera platform, Davidson’s pop-up quiz questions are open-ended. Example: “What is one thing–a pattern, habit, behavior–you have had to ‘unlearn’ in your life in order to be able to learn something new?” I didn’t find these questions well motivated by the preceding video content, and, after reading Alan (@cogdog) Levine’s post pointing out that one-word answers are judged incorrect and multi-word answers are judged correct, I stopped taking the questions seriously. Especially after my answer of “Wait, what?” to one question was met with an enthusiastic “CORRECT!” from the system.

In theory, having MOOC students do a little reflective writing as they encounter new material through lecture is a good idea. In practice, it didn’t work for me. I kept wondering why I was being asked to reflect on the given question, and who, if anyone, would ever read my reflections.

Rookie Mistakes. On her blog and on Hybrid Pedagogy, Cathy Davidson has somewhat courageously shared her behind-the-scenes experiences designing and producing this MOOC, so I won’t give her a hard time for things like video production quality. However, it was pretty clear that the first video in the MOOC was the first video that Davidson and her team filmed. When the Vanderbilt team was producing its first MOOCs last year, the team learned a lot making these kinds of videos over the course of a few months–then went back and reshot the first few videos of each MOOC. Given how simple it is for students to leave a MOOC, you don’t want your first videos to be your worst videos. These days, we tend to start with second-week videos, then go back and shoot first-week videos once the faculty member is more comfortable being filmed.

Fact-Checking. Another lesson we learned quickly was that every little bit of content you put out in a MOOC gets scrutinized, simply because so many eyes are on it. In #FutureEd (as Davidson’s course is known on Twitter), Alan Levine (@cogdog again) questioned the repeated assertion that April 22, 1993, marked the beginning of the current “Information Age.” True, that was the date that Mosaic, the first widely adopted Web browser, was released to the public, but Davidson’s statements about the date seemed to imply some other, larger event involving, among others, Al Gore. Cathy Davidson responded to Levine’s question on his blog, and Levine seems satisfied with the answer, although I don’t see that she actually addressed the date question.

Meanwhile, I was fascinated by Davidson’s story about a student rebellion at Yale University in the 1830s triggered by the introduction of chalkboards. As a math educator and fan of chalkboards, I went digging for more information, and discovered that Davidson had somewhat mischaracterized the incident. The students didn’t object to the professors using chalkboards; they objected to being required to memorize the classic conic sections diagrams and reproduce them on their tests. Presumably, before chalkboards were introduced, professors couldn’t draw these diagrams for students during class, so students were allowed to refer to textbook diagrams, even during tests. Once professors started using chalkboards, however, students were asked to do what their professors did–recreate the diagrams from memory. (The best part of this story: The incident is referred to as the “Conic Sections Rebellion.” Surely, there’s a math department band with that name somewhere?)

Neither of these two incidents is too troubling, but they are reminders that one’s MOOC content will be fact-checked, either before or after it’s released to the public! My CIRTL Network colleagues and I will want to build time into our production schedule to take a very close look at our content before we share it publicly.

Quizzes as Formative Assessment. Cathy Davidson is trying to push the envelope with the end-of-week, multiple-choice quizzes common in Coursera MOOCs. She mentions that research indicates that students often remember the wrong answers to quiz questions as correct, and, since she wants the #FutureEd quizzes to be learning opportunities, she’s structured them so that there are no wrong answers.

Yes, that’s right. A quiz with no wrong answers.

I took the first week’s quiz last night, and tried to consider each question thoughtfully. There were many “mark all that apply” questions, as well as a few questions with “all of the above” answer choices. I assumed, quite naturally I think, that some of the answer choices would be incorrect, so I made careful decisions on each question, trying to remember (and process) the video lectures I had watched earlier in the evening. After I submitted the quiz, I was presented with a report showing me which answers I had gotten right and which I had gotten wrong. Given the feedback I received, I was able to determine that, in fact, “all of the above” was the intended correct answer for each question in which that option appeared. And those “mark all that apply” questions? I should have marked all, in every case.

This was surprising.

I understand Davidson’s desire to prevent misinformation through her quizzes and her intent to use the standard Coursera quiz for formative, not just summative, assessment. But if students are to learn new material, they need to struggle with that material. If all the answers on a quiz are correct–and if students know that, as students in #FutureEd now do–there’s no opportunity for struggle, for testing one’s memory.

I have a little experience with multiple-choice questions intended for formative assessment. When students respond to a question and are given rapid feedback on their response (as is the case with clicker questions and with these Coursera quizzes), students are likely to learn from that feedback. Responding to the question requires them to activate prior knowledge and to practice applying that knowledge to a new situation. Getting the question wrong can be a strong motivator to listen to the feedback and make sense of it. Elizabeth Nagy-Shadman and Cynthia Desrochers include an excellent summary of the research on feedback in their 2008 study of student perceptions of clickers. Here’s an excerpt:

When questions posed using the SRT system are immediately followed by a discussion of incorrect responses, teachers are using one of the most powerful predictors of positive student outcomes. In Walberg’s (1984) meta-analysis of which educational interventions had the greatest impact on student achievement, he found that instruction that incorporated feedback and correctives had one of the largest instructional effects, equal to approximately one standard deviation above conventional class settings… Moreover, Bloom (1984) referenced Walberg’s results to explain the success of mastery learning, the method described above where students learn through formative tests, feedback, corrective procedures, and parallel formative tests as needed, and are required to reach mastery of current objectives before moving to the next set of objectives.

Nagy-Shadman and Desrochers go on to cite Chickering and Gamson’s “Seven Principles for Good Practice in Undergraduate Education” and Zull’s The Art of Changing the Brain, as well.

I’m glad that Cathy Davidson is experimenting with the teaching practices common in MOOCs, but I don’t see this style of end-of-week quizzes as useful to student learning. And it’s definitely rubbish as summative assessment.

Image: “Slippery Slope,” Andreas Levers, Flickr (CC)
