Research on Classroom Response Systems at the 2011 AERA Conference

Dan Robinson, an associate professor of educational psychology at the University of Texas at Austin, let me know that several of his grad students are presenting the results of their research on clickers in a symposium at the 2011 American Educational Research Association (AERA) annual meeting in New Orleans, April 8-12, 2011. I don’t think I’ll be able to make it to the AERA meeting, but I’m betting some of you reading this are planning to go. If so, I hope you’ll check out Dan’s symposium. Below you’ll find abstracts for his students’ projects.

Thanks, Dan, for letting me know about this.

Evaluation of MOCA, a Mobile Ongoing Course Assessment Tool
Stephanie B. Corliss & Joel Heikes

To capitalize on the saturation of mobile devices owned by students, the Mobile Ongoing Course Assessment system (MOCA) was developed to provide a platform that facilitates the exchange of information between instructors and students. Using web-enabled devices such as smartphones, Blackberries, netbooks, iPod Touches, iPads, or laptop computers, MOCA allows instructors to survey students during class to assess learning and to engage students in thoughtful discussion. The purpose of the evaluation was to determine whether MOCA can be used effectively in the classroom with the current campus infrastructure and to measure student and instructor satisfaction with using the tool on either a mobile device or a laptop computer. Instructor (N=3) and student (N=48) pre- and post-use surveys and classroom observations were analyzed. Results revealed that students and instructors found the tool easy to use, were satisfied with their experience, and felt that MOCA was helpful for student learning and engagement. There were no statistical differences in comfort, satisfaction, or ease-of-use ratings by the device used to access the tool (mobile device versus laptop computer). Instructional uses of the system varied by instructor and included graded quizzes, checking students’ understanding, and polling students’ opinions and experiences. Lessons learned and future uses of MOCA, along with the benefits and drawbacks of the MOCA system compared to other classroom response systems, will be discussed.

Student Accountability with Classroom Response Systems
Jane Vogler

We used a crossover design to examine whether the increased accountability of assigning course credit to CRS questions leads to an increase in student learning by comparing students within the same learning environment. Students enrolled in an undergraduate educational psychology course were divided into two groups that alternated using CRS in two consecutive units. All of the students, with and without CRS, attended and participated in class as normal.

During two 75-minute lecture class periods, the instructor asked between two and six multiple-choice questions per period, for a total of eight questions per unit. Students in the CRS group had the opportunity to answer the questions and earn bonus points. Students in the non-CRS group had the same opportunity to privately consider each question and its answer, but they could not receive bonus points for that unit. In the following unit, the students switched groups so that all students had an opportunity to use CRS to answer questions for points during class lectures. At the end of each unit, all students took the same exam. At the end of the two units, students completed a survey about the use of CRS in the class and their perceived engagement in the course.

There was no difference in unit exam scores between the CRS (M = 22.29, SD = 14.21) and non-CRS groups (M = 22.61, SD = 12.03), t(49) = 0.67, p > .05. This may be because all students were exposed to the questions and the subsequent discussions of the answers. Although there was no difference in learning outcomes, student survey results were generally positive. Eighty-eight percent of the students felt that the CRS questions helped them “prepare for the unit exams,” and 94% reported that the questions helped them to “learn better in the class.” In addition, 90% of the students agreed or strongly agreed that the CRS helped them feel more engaged during class lectures. This is consistent with previous findings in the literature (Blood & Neel, 2008; Mayer, Stull, et al., 2009; Morling, McAuliffe, et al., 2008).
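For readers who want to see what this sort of comparison looks like in practice, here is a minimal sketch of an independent-samples t-test on exam scores. The data are randomly generated placeholders and the group sizes are my assumptions; the sketch only illustrates the form of the analysis reported above, not the study’s actual data or code.

```python
# Minimal sketch of a two-group exam-score comparison (hypothetical data only)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
crs_scores = rng.normal(loc=22.29, scale=14.21, size=26)      # hypothetical CRS-group exam scores
non_crs_scores = rng.normal(loc=22.61, scale=12.03, size=25)  # hypothetical non-CRS-group exam scores

# Independent-samples t-test; a p-value above .05 would mirror the null result reported above
t_stat, p_value = stats.ttest_ind(crs_scores, non_crs_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```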

Using Dual-Task Methodology to Measure Student Attentiveness with Classroom Response Systems
Jason Crandall

Student engagement is a critical component of effective classroom instruction and student response devices, such as clickers, are thought to increase student engagement by providing students with regular opportunities to check their comprehension or express their opinions. Claims of increased student engagement due to clicker use are often based upon student self-reports and have only a small correlation with observed learning gains or other measures of attentiveness. This study compared learning, engagement and attentiveness scores for three groups: a control group, a group that considered but did not answer lecture questions (question group), and a group that answered lecture questions using clickers (clicker group). As expected, there were no differences among the groups in pre- and post-test scores or self-reported engagement scores. However, the mean response time on an ancillary, non-lecture related task was greater for the clicker group (81.9 seconds) than it was for either the control (16.9 seconds) or question groups (17.4 seconds). Thus, students using clickers were less able to focus on ancillary tasks during a lecture than students not using clickers. This suggests that using clickers increases student attentiveness.
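The dual-task logic is that the slower students are to respond to a secondary, non-lecture task, the more attention they are presumably devoting to the lecture itself. Below is a rough sketch of that group comparison; the group sizes, variances, and the choice of a one-way ANOVA are assumptions made for illustration, not the study’s actual data or analysis.

```python
# Sketch of a dual-task attentiveness comparison (hypothetical data only)
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical secondary-task response times in seconds; means echo those reported above
control = rng.normal(loc=16.9, scale=5.0, size=30)
question = rng.normal(loc=17.4, scale=5.0, size=30)
clicker = rng.normal(loc=81.9, scale=20.0, size=30)

f_stat, p_value = stats.f_oneway(control, question, clicker)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# Under the dual-task interpretation, a reliably longer mean response time for the
# clicker group indicates more attention devoted to the lecture (the primary task).
```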

Comparing Pre-Lecture and Start-of-Lecture Questions’ Effects on Student Readiness and Learning with Classroom Response Systems
Sara J. Jones

These two studies took place during four consecutive units of an undergraduate educational psychology course. Students used CRS in alternating units for points toward their class participation grade. During the first unit of the semester, the class lectures did not contain any in-class multiple-choice questions. Prior to the second class unit, half of the class was told that they would have the opportunity to earn class participation points for correctly answering in-class questions. The control group was told that, due to a lack of devices, they would not be answering in-class questions until the following unit. Immediately upon entering class for Unit 2, all students were given an iClicker device and asked to answer 10 pre-lecture questions as a technology training session. None of the students received points for these questions, but they were asked to try to answer them correctly. In the third class unit, the procedure was reversed. To make the “training session” a believable situation, the researchers introduced MOCA as the classroom response system instead of iClickers. End-of-unit exam grades were collected for all students.

For the second unit, the students who had been told they would use clickers for course credit scored higher on the unit exam than the students who had been told they would not be using clickers, F(1, 36) = 4.418, p < .05. However, when the groups were reversed for the third unit, there was no difference. This could be due to a crossover effect: the first group may have learned that reading before class helped them on the unit exams and continued to read even when they were not using clickers.

For the second study, MOCA was again used to assess students’ pre-reading of course materials and whether it had an impact on their learning in the course. For Unit 4, students were assigned to read one chapter from a textbook before each lecture class period; two chapters were covered in each unit of the class. Ten multiple-choice questions were available via MOCA for students to answer prior to class. Half of the students were told that they would receive participation credit for correct answers. The other students were told that they could answer the questions but would not receive any credit for their answers until the following unit. In the fifth unit, the groups were reversed. Students in the group receiving points (accountability) were more likely to answer the pre-reading questions than the students not receiving points. However, there were no differences in the unit exam scores of the two groups. Thus, when students know they will be answering CRS questions for credit in class, they come to class better prepared (likely having read the chapter) than students who do not expect to answer CRS questions for credit. However, if students simply answer questions before class, they may just “look up” the answers rather than actually complete the assigned readings.

The Interaction of Paper vs. Electronic CRS and Individual vs. Collaborative Feedback
Camilo Guerrero

The aim of the present study was to empirically assess how combining an active, collaborative learning environment with a CRS in a postsecondary setting can influence and improve learning outcomes. To this end, the study compared instructional designs using two response formats (clickers and flashcards) and two methods for answering in-class questions (collaborative peer instruction and individual work). The theoretical bases that provide the academic structure for the five instructional conditions (control, clicker-response individual, clicker-response peer instruction, flashcard-response individual, and flashcard-response peer instruction) are generative learning theory and social constructivism.

Participants were 171 undergraduate students from a large Southwest university. The researcher used a two-way analysis of covariance (ANCOVA) with two treatments (response format and collaboration level) as the between-subjects factors; students’ posttest scores as the dependent variable; and pretest scores as the covariate. Results showed no main effects; however, there was an interaction effect between response format and method. To follow up the interaction, the researcher conducted tests of the simple effects of response format within each method condition, with the pretest as the covariate. Results showed that for students who collaborated, clickers were better than flashcards, whereas when students worked individually, there was no difference.
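For readers curious about the analysis, here is a minimal sketch of a two-way ANCOVA of this form with a simple-effects follow-up. Everything in it, including the fabricated data frame, the column names, and the sample size, is a hypothetical illustration of the general approach described above, not Guerrero’s actual data or code.

```python
# Sketch of a two-way ANCOVA with pretest as covariate (hypothetical data only)
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 160  # hypothetical sample size
df = pd.DataFrame({
    "fmt": rng.choice(["clicker", "flashcard"], size=n),    # response format
    "method": rng.choice(["individual", "peer"], size=n),   # collaboration level
    "pretest": rng.normal(50, 10, size=n),
})
df["posttest"] = 0.6 * df["pretest"] + rng.normal(0, 8, size=n)  # placeholder outcome

# Two-way ANCOVA: posttest as dependent variable, pretest as covariate,
# with the format-by-method interaction included
model = smf.ols("posttest ~ pretest + C(fmt) * C(method)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Follow-up simple effects of response format within each method, pretest still the covariate
for level, sub in df.groupby("method"):
    simple = smf.ols("posttest ~ pretest + C(fmt)", data=sub).fit()
    print(level)
    print(sm.stats.anova_lm(simple, typ=2))
```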

This study builds upon existing studies by using a stronger empirical approach, with more robust controls, to evaluate the effects of several instructional interventions (clicker and flashcard response systems and peer instruction) on learning outcomes. It shows that CRS technology might be most effective when combined with collaborative methods.

Image: “classroom,” velkr0, Flickr (CC)
