Course Feedback (Part 2)

Back in May, I blogged about your responses to the end-of-semester feedback survey. At the end of that post, I promised a second post addressing a couple of themes in those responses: a perceived over-emphasis on data visualization and frustration with the multiple-choice exam questions. This post takes up those two themes.

(Aside: The word cloud to the right was generated from the text of your comments on the official VU course evaluations. The larger the font size, the more frequently that word appeared in your comments. Word clouds aren’t the most sophisticated text analysis tools around, but they do provide a useful way to identify themes. This word cloud was inspired by Jessica Riviere.)
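(One more aside for the curious: here’s a minimal sketch of how a word cloud like this one can be built. It uses the open-source Python wordcloud package, which sizes each word in proportion to its frequency; this is just an illustration, not necessarily the tool I used, and comments.txt is a hypothetical file holding the comment text.)

```python
# A minimal sketch: build a word cloud from comment text.
# Assumes the comments live in comments.txt (hypothetical filename)
# and uses the open-source "wordcloud" package.
from wordcloud import WordCloud
import matplotlib.pyplot as plt

with open("comments.txt") as f:
    text = f.read()

# Common words ("the", "and", ...) are dropped via the package's default
# stopword list; font size scales with word frequency in the text.
wc = WordCloud(width=800, height=400, background_color="white").generate(text)

plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
```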

Data Visualization

Here are a few of your comments about the data visualization component of this course:

  • “While the presentation and display of data in a meaningful and intelligent way is an important topic, I feel that perhaps an excessive amount of time has been spent on these sorts of things when other topics might be much more relevant particularly in the context of an engineering math class. I would say that sacrificing some of the (I would say more facetious, frankly) infographic material for some additional discussion of actual data analysis and statistical inference techniques used in engineering practice.”
  • “Did not teach statistics, rather he taught how to use social networking tools and rate online visualizations of statistical data (mostly for aesthetics rather than mathematics).”
  • “The course was very interesting overall, but the application project to create an infographic had very little to do with what we had done over the course of the semester (and the project itself was very poorly explained).”
  • “Being able to design an aesthetic infographic takes many hours devoted to something that doesn’t show whether we can do what’s needed for the math course. A significant portion of our grade is based on an infographic that doesn’t signify our math abilities, but instead our hours spent on designing an aesthetic construct. If we can do the stats, we can do the stats. The infographic just makes the project something to dread.”
  • “the application project of designing an infographic, something that has very little to do with the course and is an absolutely deplorable representation of applying what we learned during the semester, serving only to feed Dr. Bruff’s incredible and useless, for the purposes of this course, love for data visualization”

(Another aside: The tone of that last comment was much harsher than that of the other comments I received this semester, or any other semester. I include it here for completeness, but its sentiment was expressed much more politely in the first comment above. I’ll remind you that your instructors do indeed see your comments, and the manner in which you write them affects how they are received. I’m much more likely to take seriously the first comment above thanks to its respectful and constructive tone.)

Regarding the content of these comments, I should mention that this spring was the fourth time I had taught Math 216 but the first time I included a unit on data visualization. I did so this time partly to make the course more interesting to me and to you (straight-up statistics can be a little dry), but mostly because, as I look at how data is used in engineering and other fields, I see data visualizations becoming increasingly common. We’re now able to collect and store orders of magnitude more data in a host of contexts, from astronomy to biology to physics to geoscience. Making sense of these massive data sets (“big data”) is incredibly difficult without good visualization tools, an argument made by these Georgia Tech researchers and the authors of this textbook on visualization. The National Science Foundation’s new BIGDATA initiative highlights the importance of visualization, and visualization is becoming more and more important in the world of business intelligence, too.

Since this was a one-semester course on statistics, I wasn’t able to have you build your skills to the point that you could create the kinds of sophisticated visualizations seen in those links. But I wanted to help you develop a sense of how quantitative data can be communicated visually in more or less effective ways, since I think that sense will serve you well as you consume and perhaps create data visualizations in your future engineering work. Infographics, as a particular kind of data visualization, are simple enough that you could learn to build them by the end of the course but complex enough to give you the chance to hone your visual thinking skills, which made them a practical way for me to approach this learning objective.

For your final project, you were asked to do every bit of statistical analysis that I asked students in previous offerings of this course to do. Where they had to write five-page papers that effectively communicated their results, you were asked to create infographics, and your infographics were graded on the quality of their communication just as those papers were. The aesthetic appeal of your infographics mattered, but, as you can see in the rubric for the project, aesthetics contributed only about 8% of your project grade. The other 42% or so of your grade that derived from the effectiveness of your communication had nothing to do with how your infographic “looked” and everything to do with the decisions you made to communicate quantitative and statistical data in accurate and meaningful ways.

I understand that the connections between the more computational parts of the course content and the use of visualization tools (such as infographics) to communicate the results of those computations were sometimes hard to see. And it seems I didn’t do enough early on to justify the inclusion of data visualization in the course. Those are lessons I’ll take with me into future offerings of this course, particularly in how I describe and support future infographics projects. I offer the explanations above not so much to justify the visualization component of the course as to help you understand, even at this late date, why that component was important.

Multiple-Choice Test Questions

Here are a few comments about the (dreaded) multiple-choice questions on my exams:

  • “Having multiple choice questions on exams, especially ones worth as high 6 points apiece, is sort of annoying. I believe very strongly in partial credit and showing a process. I don’t feel that multiple choice or true/false questions should be weighted that high.”
  • “He teaches well in class, the homework is fair yet challenging, but the tests are absurd. I never feel like he is testing me on my knowledge with the multiple choice section. He is testing me on my ability to solve riddles. There are some questions with a 25% success rate because of his phrasing, and no partial credit. Great teacher, awful test writer.”
  • “Did not enjoy the ‘trick’ multiple choice questions. It is never an instructor’s goal to trick their students.”
  • “I absolutely hated having multiple choice worth 6 points. You could have a perfect exam, get everything perfect, and trip up on 3 of those and get a C on your exam. I don’t feel like that C correctly represents your understanding of the material considering you did perfectly on the rest of the exam”
  • “Honestly, the multiple choice questions were poorly weighted on the exams. Feel like they punished me despite doing well on the short answer.”

So, tell me what you really think! These comments raise a few concerns about the multiple-choice questions. One is that they were “trick” questions. I’ll admit that a couple of them functioned as “trick” questions, although that was never my intention. My goal with the multiple-choice questions was to assess your understanding of important statistical concepts, like the meaning of a p-value or the idea behind conditional probability. Each of those concepts has one or more associated misconceptions, and these informed the design of the multiple-choice questions: the right answer corresponded to the correct conception, and the wrong answers corresponded to misconceptions. Most of the time, I feel I did a good job of splitting these conceptions and misconceptions into separate answers, so that the right answer was verifiably correct and the wrong answers were most definitely wrong. For a couple of questions, I didn’t do this as well, and those were the questions that (I think) were seen as trick questions.
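To make that conception/misconception split concrete, here’s a hypothetical example (not one of the actual exam questions) built on the classic conditional probability misconception of confusing P(A|B) with P(B|A), sketched in Python with made-up numbers:

```python
# A hypothetical illustration (not an actual exam question) of a common
# conditional probability misconception: confusing P(A|B) with P(B|A).
# Scenario: a rare disease (1% prevalence) and a test that is 95% accurate.
p_disease = 0.01            # P(disease)
p_pos_given_disease = 0.95  # sensitivity, P(positive | disease)
p_pos_given_healthy = 0.05  # false positive rate, P(positive | healthy)

# Total probability of a positive test result
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: P(disease | positive)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(f"P(positive | disease) = {p_pos_given_disease:.2f}")  # 0.95
print(f"P(disease | positive) = {p_disease_given_pos:.2f}")  # about 0.16
```

A student holding the misconception would answer 95% for the second probability; the correct answer is only about 16%, since healthy patients vastly outnumber sick ones. Wrong answer choices built from misconceptions like this one are diagnostic, not tricky.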

I want my tests to be fair and accurate assessments of your understanding. Those “trick” questions didn’t live up to my own standards, which posed a problem when it came to your course grades. In one case, I gave credit after the test for a second answer choice, in an effort to treat your grades fairly. For both midterm exams, I allowed you to submit corrections for points back, softening the blow of any “trick” questions. I understand that it’s still frustrating to get a question wrong when you felt it was not clearly worded, but I hope you’ll acknowledge that such questions had very little effect on your final course grades.

Another important point about the multiple-choice questions is that they assessed different learning objectives than the free-response questions. Where the multiple-choice questions focused on your conceptual understanding, the free-response questions measured your computational skills. Different learning objectives require different kinds of assessments. It wouldn’t have been helpful to test your computational skills with multiple-choice questions, but free-response questions work well for that, since they allow me to see (and award partial credit to) your process, not just your final answer. Conversely, free-response questions aren’t entirely appropriate for assessing conceptual understanding, since asking you to express that understanding in words would have required too much subjectivity on my part in the awarding of partial credit. Multiple-choice questions, particularly ones that demonstrate the conception/misconception split I mentioned above, are more objective measures of your conceptual understanding.

If you agree with all that, you still might not agree that the multiple-choice questions should have counted for 36% of your midterm grades. Here, it’s best not to think of the question format as contributing that much to your grades. Rather, think of the balance of learning objectives: just over a third of your grade was determined by your understanding of course concepts, and almost two-thirds depended largely on your computational and problem-solving skills. I assert that’s a reasonable balance. You may be used to other courses in which your computational and problem-solving skills generated most or all of your grade, but I would argue that courses that do not value conceptual understanding do you a disservice. Without conceptual understanding, all those computational skills are just recipes to follow. You’ll have difficulty remembering how to use those recipes, or adapting them to messy, real-world problems, if you don’t have a good conceptual foundation. That’s why I emphasize conceptual understanding to the extent that I do.

As I said above about data visualization, I hope these explanations help you understand why I made some of my teaching decisions this spring. For the most part, I made those decisions in deliberate, intentional ways, and my intention was to provide you with a quality education. I know I still have some room to grow in my question-writing skills, and I’ll strive to write better test questions in the future.

Thanks for your comments, and best of luck with your courses (or jobs or job searches) this fall.
