Pre-Class Reading Assignments in Statistics
Back in 2006, I began a scholarship of teaching and learning project exploring what my students learned from reading their textbooks before class. Below you’ll find a report on this project I prepared in late 2007. I have a peer-reviewed chapter on this project coming out in a book in 2012 or 2013, but the informal report below gives a good sense of the project and its results.
Two primary questions are investigated in this project: What are students in a probability and statistics course for engineering majors likely to learn by reading their textbook before class? And what kinds of pre-class reading quiz questions, to be answered by students online the night before class, are likely to help such students learn more from reading their textbook? Answers to these questions will be of interest to instructors in similar situations who want their students to learn more from reading their textbooks, who want to “free” class time for more active teaching methods, and who want to be more responsive to the learning needs of their students.
I am investigating these questions by analyzing student responses to these pre-class reading quizzes, which consist of questions about notation, conceptual questions, computational questions, and “muddiest point” questions asking students to name a question they have about the reading. Student responses to an end-of-semester survey about their textbook reading habits, as well as student responses to a few in-class quizzes designed to assess the effectiveness of the pre-class reading quizzes, are also being used to help answer these questions. Initial results indicate that asking computational questions on pre-class reading quizzes helps students learn both computational and conceptual material from reading their textbooks, more so than asking pre-class conceptual questions does. This somewhat surprising result has implications for the roles of conceptual and procedural learning in statistics courses.
Context
Each spring I teach Math 216: Probability and Statistics for Engineers. The course functions as a conceptual and procedural introduction to a selection of ideas and techniques in probability and statistics useful to engineers. Almost all of the students that take the course are engineering majors, most of them in their junior year. I am the instructor of record for the course and have control over the content and course requirements. There is a lot of material that could be in such a course, and I have tried to make informed decisions about the content most relevant to future engineers.
As mentioned above, most of the students taking the course are undergraduate engineering majors. Most are juniors, although I always have a few sophomores and seniors. Most are either civil or electrical engineering majors, although most of the other engineering majors are represented in the class. The students are taking the course as a requirement. Many of them have chosen this course among several options to fulfill their mathematics elective requirement, so they have at least some interest in the course content, but many have no particular interest in statistics itself. About a third of them have taken some course in probability or statistics before, but few of them remember much from those experiences.
There are a variety of statistical techniques that my students might need to use in the future as engineers. One major difficulty in applying most of these techniques successfully is that depending on the context of a particular statistics problem, a variety of assumptions can be made that affect the choice and use of technique. Without some conceptual understanding of the ideas that power these statistical techniques, it’s difficult to determine how to navigate these assumptions and apply the techniques successfully. To put it another way, if you use some statistical technique but treat it as a “black box,” not understanding how or why it works, then you may have success in using it to solve textbook-like problems, but you won’t have the understanding necessary to adapt it to non-standard situations–which are very common in the real world! Thus, my major goal for the course is to help my students understand the concepts behind a few of the more useful statistical techniques so that they can apply those techniques appropriately in their later careers.
The “enduring understanding” I want my students to have is that many human-made and naturally occurring processes produce variable results and that this variability can be quantified. Doing so usually doesn’t allow one to say that the result of some such process will necessarily be X, but it does allow one to say that the result is likely to be X–and to quantify that likelihood. This kind of quantification provides useful information for making decisions about these processes. Thus, the point of the course is to develop meaningful ways to quantify variability.
Questions
While it is common to expect students to read their textbooks or other resources before class in the humanities and social sciences, it is not as common to have students “do the reading” before class in math and science classes. Inspired by the work of Harvard physics professor Eric Mazur, I regularly ask students in my mathematics courses to read their textbooks before coming to class. One reason for doing so is that I often find that students understand mathematics material more thoroughly with repeated exposure to it–once before class, once in class, and again after class in the homework. The other is that if students are able to understand certain aspects of the material just by reading their textbook, then I can spend more class time on more active learning activities, such as “peer instruction” facilitated by classroom response systems (“clickers”). Mazur speaks of the difference between the transfer of information and the assimilation of information. Transfer can happen outside of class by having students read their textbooks, leaving more time in class for the more difficult task of assimilation.
This approach to class time and textbooks leads to my primary questions of inquiry. The first question is, What are my students able to learn by reading their textbooks before class? I try to answer this question on a day-to-day basis during the course by analyzing student responses to online, pre-class reading quizzes they complete before class each day. In these quizzes, I ask a few open-ended questions about the reading, and I also ask students to identify any difficulties they encountered in the reading. Analyzing their responses before class (“just-in-time”) provides me with useful information about my students’ thinking that often informs my lesson plans. However, by investigating this question of inquiry more systematically, I hope to make even more informed decisions about how to spend my class time.
My second question is, What kinds of pre-class reading quiz questions help my students learn more from their textbook? Research indicates that asking students questions about their readings can help them process those readings. Are there particular kinds of questions that increase the effectiveness of this process? Answers to this question will help me design the pre-class reading quizzes I have my students complete.
These questions are likely to be of interest to instructors, particularly those in mathematics and the natural sciences, who want their students to learn more from reading their textbooks, who want to “free” class time for more active teaching methods, and who want to be more responsive to the learning needs of their students. Although instructors in other disciplines may not be interested in the extent to which students are able to learn about particular statistics topics from their textbooks, I think the methods of investigation will be of interest to others exploring these kinds of issues.
Hypotheses
In response to the first question, based on anecdotal evidence, I conjectured that students are most able to master computational procedures (e.g. “Find the probability that an observation of a normal random variable is more than one standard deviation from the mean.”) and have more difficulty with concepts (e.g. “If the standard deviation of a normal distribution is doubled, how is the graph of the distribution changed?”). This is consistent with Mazur’s findings that the ability to solve computational problems in physics does not imply the ability to solve conceptual ones.
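For readers who want to check the computational example above, the probability that a normal observation falls more than one standard deviation from the mean can be worked out in a few lines of Python. This sketch is my own illustration, not part of the course materials; it uses the standard library’s `statistics.NormalDist`.

```python
from statistics import NormalDist

# Standard normal distribution; by symmetry the answer is the same
# for any normal distribution after standardizing.
z = NormalDist()  # mean 0, standard deviation 1

# P(|X - mu| > sigma) = P(Z < -1) + P(Z > 1)
p = z.cdf(-1) + (1 - z.cdf(1))
print(round(p, 4))  # ≈ 0.3173
```

That is, roughly 32 percent of observations fall more than one standard deviation from the mean, the complement of the familiar 68 percent rule.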
In response to the second question, based on anecdotal evidence, I conjectured that asking students conceptual or “big picture” questions on pre-class reading quizzes helps them learn more from their textbook than asking them computational or detail-oriented questions. Although students are often not able to answer tough conceptual questions on the basis of their pre-class reading, I conjectured that having students take a “first pass” through the content via their textbook means that students are better prepared to master concepts during a “second pass” in class.
Initial Explorations
When I taught Math 216 in the spring 2006 semester, I was not intentional about gathering evidence that would enable me to answer my inquiry questions. However, I had my students complete online, pre-class reading quizzes once or twice a week throughout the semester. Each quiz consisted of three or four open-ended questions about the reading, the last of which was always a “muddiest point” question that asked students to identify difficulties they had with the reading. At the end of the semester, I analyzed this rich source of data about my students’ experiences with my reading assignments, hoping that doing so would shed some light on my questions of inquiry.
First, I categorized all of my pre-class reading quiz questions according to the cognitive process dimension of Bloom’s Taxonomy of Educational Objectives. That is, I determined whether each question asked my students to remember, understand, apply, analyze, evaluate, or create knowledge. I did so thinking that I might be able to determine whether students were better able to answer questions of one type than another, which would help me answer my first inquiry question. However, I discovered that most of my questions (76 percent) were “understand” questions, which is consistent with my conjecture that conceptual questions help students learn more from the reading. Only 15 percent were “apply” questions, leaving the remaining 8 percent of questions spread across the other four categories. This informed me that in my next offering of the course, I would need to ask a greater variety of pre-class reading quiz questions in order to continue my investigations effectively.
Second, I analyzed the difficulties my students reported having with the reading via the “muddiest point” questions on each pre-class reading quiz. I categorized their difficulties using the knowledge dimension of Bloom’s taxonomy, which meant counting the difficulties my students reported with factual knowledge, conceptual knowledge, procedural knowledge, and meta-cognitive knowledge. I counted these difficulties for each textbook section, giving me a profile of sorts of the kinds of difficulties students have with various topics. As the chart below shows, some topics (e.g. probability) were more conceptually difficult for students, while others (e.g. hypothesis tests) were difficult both conceptually and procedurally.
One interesting finding was that for some topics (e.g. normal distributions) students asked a lot of factual questions about the reading. This meant, in most cases, that they did not understand the notation used in the reading. I was a little surprised at this, since students with notational questions had the option of looking up notational meanings in the textbook as they were doing the readings, but nonetheless, it was clear that students struggled with notational issues, particularly with certain topics.
Given these results from analyzing spring 2006 data, I was ready to design a more intentional investigation of my inquiry questions in the spring 2007 offering of this course.
Gathering Evidence
Given the results of my initial explorations, I modified the format of the pre-class reading quizzes I required my students to complete in the spring 2007 offering of Math 216. Each quiz consisted of four questions: a notational question, a conceptual question, a computational question, and a “muddiest point” question asking students to give one question they have about the reading. These quizzes, graded on effort not accuracy, counted for 5 percent of the students’ course grades. I planned to address my first inquiry question by analyzing student responses to these quizzes.
In order to gather evidence more directly in support of answering the second question, late in the semester I varied the format of these quizzes. On three such quizzes, I assigned students to three groups. Group A’s quiz consisted of three notational questions and one muddiest point question. Group B’s quiz consisted of three conceptual questions and one muddiest point question. Group C’s quiz consisted of three computational questions and one muddiest point question. Group assignments rotated on each quiz so that each student received a quiz of each type over the course of these three quizzes. The following are examples of each type of question.
- Notational Question: In your own words, what is the difference between the meanings of the symbols β1 and b1?
- Computational Question: Suppose you’re given the following regressor-response pairs of data: (2, 5.24), (4, 9.99), (6, 15.58), (8, 19.51), and (10, 21.16). Calculate the slope of the line of best fit to these data.
- Conceptual Question: Why is the line of best fit to a set of regressor-response data determined by minimizing the sum of the squares of the residuals instead of the sum of the residuals themselves?
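To ground these examples, the computational question can be checked with a short Python sketch (my own illustration, not part of the course materials). It computes the least-squares slope from the centered sums, and it also verifies the fact behind the conceptual question: the residuals about the least-squares line sum to zero, so minimizing their raw sum could never single out one best line.

```python
# Regressor-response pairs from the example computational question
data = [(2, 5.24), (4, 9.99), (6, 15.58), (8, 19.51), (10, 21.16)]

n = len(data)
x_bar = sum(x for x, _ in data) / n
y_bar = sum(y for _, y in data) / n

# Least-squares slope: b1 = S_xy / S_xx (centered sums)
s_xy = sum((x - x_bar) * (y - y_bar) for x, y in data)
s_xx = sum((x - x_bar) ** 2 for x, _ in data)
b1 = s_xy / s_xx  # ≈ 2.068

# The residuals about the fitted line sum to (essentially) zero,
# illustrating why the raw sum of residuals is a poor fitting criterion.
b0 = y_bar - b1 * x_bar
resid_sum = sum(y - (b0 + b1 * x) for x, y in data)
```

Here `s_xy` = 82.72 and `s_xx` = 40, giving a slope of about 2.068.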
All students were also quizzed on the textbook material at the beginning of class following these experimental quizzes using one question of each type (notational, conceptual, computational) different from the ones appearing on the pre-class reading quizzes. Students were asked not only to answer these in-class quiz questions, but also to rate their confidence in their answers. Extra credit was awarded for these in-class quizzes, using the following point system: 5 points for a correct answer with high confidence, 4 points for a correct answer with medium confidence, 3 points for a correct answer with low confidence, 2 points for an incorrect answer with low confidence, 1 point for an incorrect answer with medium confidence, and 0 points for an incorrect answer with high confidence. (This scoring system is the one used by Dennis Jacobs, University of Notre Dame chemistry professor, on his in-class quizzes. The overall approach to this experiment is based on a similar experiment conducted by Mike Axtell and William Turner of Wabash College.)
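Jacobs’s confidence-weighted point system is simple enough to state in code. The sketch below is my own encoding of the six-level scale described above; the function name and confidence labels are assumptions, not taken from Jacobs’s materials.

```python
def quiz_points(correct: bool, confidence: str) -> int:
    """Points for one in-class quiz answer on the 5-to-0 scale.

    confidence is "low", "medium", or "high".
    """
    rank = {"low": 0, "medium": 1, "high": 2}[confidence]
    # Correct answers earn 3-5 points (more confidence, more points);
    # incorrect answers earn 0-2 points (more confidence, fewer points).
    return 3 + rank if correct else 2 - rank

print(quiz_points(True, "high"))   # 5
print(quiz_points(False, "high"))  # 0
```

The asymmetry is the point of the scheme: a confidently wrong answer scores worse than a hesitantly wrong one, so students are rewarded for calibrating their confidence honestly.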
I also administered an end-of-semester survey to the students asking them about their experiences reading their textbook and preparing for class.
Findings
In order to investigate what students are able to learn by reading their textbooks (my first inquiry question), I plan to analyze my students’ responses to the “muddiest point” questions on their pre-class reading quizzes. This will enable me to answer my inquiry question in the negative–determining what students are not able to learn by reading their textbook. However, I have not completed this analysis. See the “Looking Ahead” section below for more on this analysis.
However, I have made progress in answering my second inquiry question. See the chart above for the results of the experimental in-class quizzes designed to determine which types of pre-class reading quiz questions are most effective at promoting understanding. As seen in the chart, students’ performance on the in-class quizzes varied with the kinds of questions asked of them on the pre-class reading quizzes. Not surprisingly, students who were asked computational questions before class performed better on computational questions during class the next day. What was surprising, however, was the finding that pre-class computational questions also did a better job of preparing students to answer in-class conceptual questions.
When I shared these results with my students on the last day of class, I asked them why pre-class computational questions might be more effective than pre-class conceptual questions. Following are the reasons they offered.
- Solving computational problems gives students experience with examples of concepts, which supports an inductive approach to learning, the approach many engineering students prefer.
- It is easier to provide poorly thought out answers to conceptual problems. You cannot “fake” your way through a computational problem. Thus, computational questions challenge students to take the reading more seriously.
- The textbook’s explanations of concepts were inferior to its explanations of procedures. Thus, students who focused on computational questions before class were able to get more out of the reading.
It is also worth noting that pre-class conceptual questions excelled in one area. Those questions best prepared students to answer in-class notational questions, even more so than pre-class notational questions did.
Results of the end-of-semester student survey on the pre-class reading assignments provide more context for these findings.
- Asking engineering students at my institution to read their textbook before class is highly unusual. Only 6 percent of students reported using their textbooks before class in their science and engineering courses.
- Survey data indicate that pre-class reading quizzes should be focused on computational / procedural knowledge. Students were asked to rate the helpfulness of various types of pre-class reading quiz questions on a scale of 1 (not at all helpful) to 4 (very helpful). The average rating for computational questions was 2.86, compared to 2.66 for conceptual questions and 2.52 for notational questions. More persuasive were the students’ responses to an open-ended question asking students what they were able to learn by reading their textbook before class. About 24 percent of students indicated they were able to learn procedural / computational knowledge in this way, compared with 15 percent for factual knowledge and 15 percent for conceptual knowledge. These data are consistent with the experimental findings described above.
- Some students found the pre-class reading assignments helped them get more out of class. Students were asked to rate a series of statements from 1 (strongly disagree) to 5 (strongly agree). The average rating for the statement “Encountering the same material repeatedly (in the pre-class reading assignments, during class, after class in problem sets) helped me learn more than if I had encountered the material only once or twice” was a relatively high 3.73.
- Students tend to skim the textbook looking for answers to the pre-class reading quiz questions rather than reading thoroughly; a statement to that effect received an average rating of 3.52.
- Students have mixed opinions on the usefulness of these pre-class reading assignments. Students rated the statement “The pre-class reading assignments should be part of future offerings of this course” at 3.18, which indicates only a slight preference for continuing to include these assignments. Comments on student course evaluations for this course indicate that some students perceived the pre-class reading assignments as too time-consuming. This perception is likely the reason some students disagreed with this statement.
Looking Ahead
The next step in my project is to develop one or more coding schemes for analyzing student responses to the “muddiest point” questions on my pre-class reading quizzes. When analyzing the spring 2006 responses, I used the knowledge dimension of Bloom’s Taxonomy (factual, conceptual, procedural, and meta-cognitive knowledge). That taxonomy will likely continue to be useful in investigating my first question of inquiry, but I can imagine other useful taxonomies.
For instance, I plan to develop a method for assessing how much insight into a student’s thought processes a particular response provides. Responses that provide me with information about student learning can help me address my students’ questions more readily. Also, such a taxonomy might provide me with a reasonable way to grade my students’ responses to “muddiest point” questions, thereby encouraging my students to provide me with more insight into their learning difficulties.
I also plan to develop a coding scheme that would help me detect signs of “deep learning” in my students’ responses. I want to design pre-class reading assignments that encourage deeper learning, so such a coding scheme would give me a tool for assessing the effectiveness of changes to my pre-class reading assignments. This would help me investigate my second question of inquiry.
At the 2007 Joint Mathematics Meetings, I met several colleagues in mathematics who are interested in questions similar to mine, particularly the analysis of student responses to “muddiest point” questions on pre-class reading quizzes. I hope to collaborate with some of these colleagues as I develop these taxonomies.
Another next step is to parse these data by various student characteristics. For instance, many of my students in the spring of 2007 were either civil engineering majors or electrical engineering majors. It might be that students in a particular major respond differently to these pre-class reading assignments. Class year (sophomore, junior, senior) and gender might also be important variables.
Finally, I will be teaching this course again in the spring of 2008. As part of my planning process, I will reflect on the results of this iteration of my project in order to (a) identify ways these results can help me improve my teaching practice and (b) design another iteration of this project to continue my investigations.