Course Feedback (Part 2)

Back in May, I blogged about your responses to the end-of-semester feedback survey. At the end of that post, I promised a second post addressing a couple of themes in those responses: a perceived over-emphasis on data visualization and frustration with the multiple-choice exam questions. This post takes up those two themes.

(Aside: The word cloud to the right was generated using the text of your comments on the official VU course evaluations. The larger the font size, the more frequently that word appeared in your comments. Word clouds aren’t the most sophisticated text analysis tools around, but they do provide a useful way to identify themes. This word cloud was inspired by Jessica Riviere.)
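
(One more aside for the curious: here’s a minimal sketch of one way to generate a word cloud like this, using the third-party Python package wordcloud. This isn’t necessarily how this particular cloud was made, and the input file name below is just a placeholder.)

    # Minimal word cloud sketch using the "wordcloud" package (pip install wordcloud).
    # The input file name is a hypothetical placeholder for a plain-text file of comments.
    from wordcloud import WordCloud

    with open("course_comments.txt") as f:
        text = f.read()

    # Font size scales with word frequency; common English stop words are dropped by default.
    cloud = WordCloud(width=800, height=400, background_color="white").generate(text)
    cloud.to_file("word_cloud.png")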

Data Visualization

Here are a few of your comments about the data visualization component of this course:

  • “While the presentation and display of data in a meaningful and intelligent way is an important topic, I feel that perhaps an excessive amount of time has been spent on these sorts of things when other topics might be much more relevant particularly in the context of an engineering math class. I would say that sacrificing some of the (I would say more facetious, frankly) infographic material for some additional discussion of actual data analysis and statistical inference techniques used in engineering practice.”
  • “Did not teach statistics, rather he taught how to use social networking tools and rate online visualizations of statistical data (mostly for aesthetics rather than mathematics).”
  • “The course was very interesting overall, but the application project to create an infographic had very little to do with what we had done over the course of the semester (and the project itself was very poorly explained).”
  • “Being able to design an aesthetic infographic takes many hours devoted to something that doesn’t show whether we can do what’s needed for the math course. A significant portion of our grade is based on an infographic that doesn’t signify our math abilities, but instead our hours spent on designing an aesthetic construct. If we can do the stats, we can do the stats. The infographic just makes the project something to dread.”
  • “the application project of designing an infographic, something that has very little to do with the course and is an absolutely deplorable representation of applying what we learned during the semester, serving only to feed Dr. Bruff’s incredible and useless, for the purposes of this course, love for data visualization”

(Another aside: The tone of that last comment was much harsher than that of the other comments I received this semester, or any other semester. I include it here for the sake of completeness, but its sentiment was expressed much more politely in the first comment above. I’ll remind you that your instructors do indeed read your comments, and the manner in which you write them affects how they are received. I’m much more likely to take seriously the first comment above thanks to its respectful and constructive tone.)

Regarding the content of these comments, I should mention that this spring was the fourth time I had taught Math 216 but the first time I included a unit on data visualization. I did so in part to make the course more interesting to me and to you (straight-up statistics can be a little dry), but mostly because, as I look at how data is used in engineering and other fields, I see data visualizations becoming increasingly common. We’re now able to collect and store orders of magnitude more data in a host of contexts, from astronomy to biology to physics to geoscience. Making sense of these massive data sets (“big data”) is incredibly difficult without good visualization tools, an argument made by these Georgia Tech researchers and the authors of this textbook on visualization. The National Science Foundation’s new BIGDATA initiative highlights the importance of visualization, and visualization is becoming more and more important in the world of business intelligence, too.

Since this was a one-semester course on statistics, I wasn’t able to have you build your skills to the point that you could create the kinds of sophisticated visualizations seen in those links. But I wanted to help you develop a sense of how quantitative data can be communicated visually in more or less effective ways, since I think that sense will serve you well as you consume and perhaps create data visualizations in your future engineering work. As a particular kind of data visualization, infographics are simple enough that you could learn to build them by the end of the course but complex enough to give you the chance to hone your visual thinking skills.  Focusing on infographics during the course provided a practical way for me to approach this learning objective.

For your final project, you were asked to do every bit of statistical analysis that I asked students in previous offerings of this course to do. Where those students had to write five-page papers that effectively communicated their results, you were asked to create infographics. They were graded on the quality of their communication, as were you. The aesthetic appeal of your infographics mattered, but, as you can see in the rubric for the project, aesthetics contributed only about 8% of your project grade. The other 42% or so of your grade tied to the effectiveness of your communication had nothing to do with how your infographic “looked” and everything to do with the decisions you made to communicate quantitative and statistical data accurately and meaningfully.

I understand that the connections between the more computational parts of the course content and the use of visualization tools (such as infographics) to communicate the results of those computations were sometimes hard to see. And it seems I didn’t do enough early on to justify the inclusion of data visualization in the course. Those are lessons I’ll take with me into future offerings of this course, particularly in how I describe and support future infographics projects. I offer the above explanations not so much to justify the visualization component of the course as to help you understand, even at this late date, why that component was important.

Multiple-Choice Test Questions

Here are a few comments about the (dreaded) multiple-choice questions on my exams:

  • “Having multiple choice questions on exams, especially ones worth as high 6 points apiece, is sort of annoying. I believe very strongly in partial credit and showing a process. I don’t feel that multiple choice or true/false questions should be weighted that high.”
  • “He teaches well in class, the homework is fair yet challenging, but the tests are absurd. I never feel like he is testing me on my knowledge with the multiple choice section. He is testing me on my ability to solve riddles. There are some questions with a 25% success rate because of his phrasing, and no partial credit. Great teacher, awful test writer.”
  • “Did not enjoy the ‘trick’ multiple choice questions. It is never an instructor’s goal to trick their students.”
  • “I absolutely hated having multiple choice worth 6 points. You could have a perfect exam, get everything perfect, and trip up on 3 of those and get a C on your exam. I don’t feel like that C correctly represents your understanding of the material considering you did perfectly on the rest of the exam”
  • “Honestly, the multiple choice questions were poorly weighted on the exams. Feel like they punished me despite doing well on the short answer.”

So, tell me what you really think! These comments raise a few concerns about the multiple-choice questions. One is that they were “trick” questions. I’ll admit that a couple of them functioned as “trick” questions, although that was never my intention. My goal with the multiple-choice questions was to assess your understanding of important statistical concepts, like the meaning of a p-value or the idea behind conditional probability. Each of those concepts has one or more associated misconceptions, and these misconceptions informed the design of the multiple-choice questions: the right answer corresponded to the correct conception, and the wrong answers corresponded to misconceptions. Most of the time, I feel I did a good job of splitting these conceptions and misconceptions into separate answers, so that the right answer was verifiably correct and the wrong answers were most definitely wrong. For a couple of questions, I didn’t do this as well, and these were the questions that (I think) were seen as trick questions.

I want my tests to be fair and accurate assessments of your understanding. Those “trick” questions didn’t live up to my own standards, which posed a problem when it came to your course grades. In one case, I gave credit after the test for a second answer choice, in an effort to treat your grades fairly. For both midterm exams, I allowed you to submit corrections for points back, softening the blow of any “trick” questions. I understand that it’s still frustrating to get a question wrong when you felt it was not clearly worded, but I hope you’ll acknowledge that such questions had very little effect on your final course grades.

Another important point about the multiple-choice questions is that they assessed different learning objectives than the free-response questions. Where the multiple-choice questions focused on your conceptual understanding, the free-response questions measured your computational skills. Different learning objectives require different kinds of assessments. It wouldn’t have been helpful to test your computational skills with multiple-choice questions, but free-response questions work well for that, since they allow me to see (and award partial credit to) your process, not just your final answer. Conversely, free-response questions aren’t entirely appropriate for assessing conceptual understanding, since asking you to express that understanding in words would have required too much subjectivity on my part in the awarding of partial credit. Multiple-choice questions, particularly ones that demonstrate the conception/misconception split I mentioned above, are more objective measures of your conceptual understanding.

If you agree with all that, you still might not agree that the multiple-choice questions should have counted for 36% of your midterm grades. Here, it’s best not to think of the question format as contributing that much to your grades. Rather, think of the balance of learning objectives: just over a third of your grade was determined by your understanding of course concepts, and almost two-thirds of your grade depended largely on your computational and problem-solving skills. I assert that’s a reasonable balance. You may be used to other courses in which your computational and problem-solving skills generated most or all of your grades, but I would argue that courses that do not value conceptual understanding do you a disservice. Without conceptual understanding, all those computational skills are just recipes to follow. You’ll have difficulty remembering how to use those recipes or adapting them to messy, real-world problems if you don’t have a good conceptual foundation. That’s why I emphasize conceptual understanding to the extent that I do.

As with the data visualization component above, I hope these explanations help you understand why I made some of my teaching decisions this spring. I made those decisions, for the most part, in deliberate, intentional ways, and my intention was to provide you with a quality education. I know I still have some room to grow in my question-writing skills, and I’ll strive to write better test questions in the future.

Thanks for your comments, and best of luck with your courses (or jobs or job searches) this fall.

Health Data Visualization Challenge

I just noticed that visual.ly has been running a health data visualization challenge in recent weeks. You have until Sunday, July 22nd, to submit visualizations of data of your choice from HealthData.gov. Both individual and group entries are invited. The prizes for first, second, and third place are pretty sweet, including books, technology, and conference travel.

Let me know if you submit an entry!

Course Feedback (Part 1)

Thanks to those of you who filled out the two end-of-semester course feedback surveys. Since you took the time to do so, I thought I would take a little time and share some of the results. First, a question from my survey:

To what extent did each of the following activities contribute to your learning in the course?

I’ve sorted these activities by the “very much” responses, from greatest to least. Working through problem sets was rated as the most helpful course activity, with 73.9% of the 46 respondents to the survey rating this activity as “very much” contributing to their learning. There were four activities (reading the textbook, studying for midterms, discussing clicker questions, and working on midterm corrections) that were essentially tied for second place, with “very much” ratings from 48.9% to 56.5%.

Some statistics here: The difference in the proportions of “very much” ratings for working through problem sets (73.9%) and reading the textbook (56.0%) was statistically significant with a p-value of .0359. The differences in proportions of “very much” ratings for the “second place” activities were not statistically significant. The lowest p-value for these differences in proportions was .2483.
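
For the statistically curious, here’s a sketch of one way a p-value like the one above can be computed, assuming a one-sided two-proportion z-test with a pooled standard error; that assumption reproduces the .0359 figure up to rounding of the percentages.

    import math

    def one_sided_two_prop_ztest(p1, p2, n1, n2):
        """One-sided two-proportion z-test using a pooled standard error."""
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        # Upper-tail probability under the standard normal distribution
        p_value = 0.5 * math.erfc(z / math.sqrt(2))
        return z, p_value

    # "Very much" ratings: problem sets (73.9%) vs. reading the textbook (56.0%),
    # with 46 respondents rating each activity
    z, p = one_sided_two_prop_ztest(0.739, 0.560, 46, 46)
    print(f"z = {z:.2f}, one-sided p = {p:.4f}")  # z = 1.80, p = 0.0360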

The peer-to-peer activities were rated the least useful, although more than half of you found reading your peers’ project proposals at least moderately useful and exploring your peers’ bookmarks at least slightly useful. It’s possible these activities were rated as less useful because they weren’t required (with the exception of that one social bookmarking assignment). I was hesitant to require you to respond to your peers’ work in these areas, however, since this course already had enough required components. In the future, I’ll consider ways to upgrade any particular peer-to-peer activity (by integrating it more fully into the course without increasing the overall workload) or, failing that, drop it from the course requirements.

As is clear from the question above, I try to engage students in my courses in a variety of learning activities. Inevitably, most students will find some of these activities more useful than others. One reason I continue to use such a variety of activities, however, is that different students respond differently to different activities. For instance, here are some survey comments on reading the textbook:

  • The book really explained things very well so before tests I would read through the chapter and understand how the concepts in class connected.
  • The OpenIntro Statistics book was very helpful. The language was clear and understandable with good relatable examples.
  • Reading the book before lecture. It kept me up to date (no long reading sessions before tests) it kept me sharp and on top of the material, and it made the lectures more interesting. That was definitely one of the largest positives of the course. The questions reinforced the reading, and made it so that you felt like you knew the material going in and truly knew the material going out of lecture.
  • Used several methods of teaching that I was unfamiliar with and didn’t find effective because I’m a visual learner who likes to see examples of things rather than just read the textbook.
  • The constant reading quizzes were also just too much and didn’t teach us anymore than just paying attention in lecture.

This is the main reason I gave you some flexibility in earning your class participation points: reading the textbook before class helps some students get more out of class, but other students prefer to have their first exposure to material occur during class. The former group of students could get much of their class participation credit through the pre-class reading assignments, while the latter group could do so through in-class clicker questions.

Here’s another set of comments, all about clicker questions:

  • It would have been nice to use clickers everyday so that those who were able to make it to every class would be rewarded.
  • The clicker questions were a good opportunity to get instant feedback on the material.
  • Clicker q’s- great way to introduce hard topics since it requires discussion with your neighbor and let’s you know you’re not the only one who doesn’t get it
  • One of the best parts of his teaching style is his use of non-traditional lecture methods like examples from the web, graphs, visualizations and thought stimulating clicker questions which keep the lecture varied and interesting and keep students awake. The fact that he engages the class in discussion also gives the course a more personal feel.
  • Clicker questions are useful but they take too much of class time.
  • Way too much class time spent on clicker questions and discussing why students chose their responses to said clicker questions.

What made class time engaging and useful for some students was seen as a waste of time by others. Given the role the clicker questions played in your class participation grades, I tried to use clickers at least briefly in most classes, although my primary concern was having you engage in learning activities during class that helped you make sense of the material. Often, clicker questions worked well for this, particularly around the more conceptual material. But other times different activities were more useful.

For instance, this student appreciated in-class worksheets:

  • I loved the classes when you handed out problems and we solved them. I learned 90% of what I learned in your course in those 4-5 classes. That was so incredibly effective, even though we rushed through them.

My instinct is to view “worksheet days” as a bit lazy. It doesn’t take much creativity or effort for me to find a few word problems, have you work them during class, and then go over them on the chalkboard at the end of class. And I tend to see the “going over them on the chalkboard” bit as a little tedious, both for you and for me. But, clearly, this routine works for some students and some topics, particularly the more computational topics. I’ll have to get over my bias and use this approach a little more often in future offerings of the course.

I’ll wrap up this post by pointing out something that sounds obvious but is worth saying from time to time: My job is to teach all the students in my class. I could use just a couple of teaching methods that work really well for some students, but that runs the risk of leaving a lot of other students out of luck. So I use a variety of methods in the hopes that every student responds well to at least one or two of them. In general, I think this works. Your “overall rating of the instructor” had a 4.41 average, and your “overall rating of the course” averaged 3.86, both improvements over the last time I taught this course, back in 2008. (The improvement in instructor rating was statistically significant at the 10% level, p = .0537, although the improvement in course rating was not, p = .2206.)
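
(If you’re curious how a comparison of average ratings like this might be tested, here’s a sketch assuming Welch’s t-test on summary statistics. Only the 4.41 mean comes from this post; the 2008 mean, the standard deviations, and the respondent counts below are hypothetical placeholders, and the choice of test is itself an assumption.)

    # Sketch: comparing average evaluation ratings across two offerings using
    # Welch's t-test on summary statistics (scipy). Only the 4.41 mean appears
    # in the post; the 2008 mean, the standard deviations, and the counts below
    # are HYPOTHETICAL placeholders, as is the choice of test.
    from scipy.stats import ttest_ind_from_stats

    t_stat, p_value = ttest_ind_from_stats(
        mean1=4.41, std1=0.70, nobs1=46,  # this spring (std is hypothetical)
        mean2=4.10, std2=0.80, nobs2=40,  # 2008 offering (all hypothetical)
        equal_var=False,                  # Welch's test: variances may differ
    )
    print(f"t = {t_stat:.2f}, two-sided p = {p_value:.4f}")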

There were two themes in the course feedback I haven’t addressed here: a perceived over-emphasis on data visualization and frustration with the multiple-choice exam questions. I’ll address those topics in a future post.

Final Exam Study Guide

Here’s a study guide for the final exam. Let me know if you have any questions about the study guide or the final.

Here are my office hours for this week:

  • Wednesday, April 25th, 3:30-5:30, CFT office
  • Thursday, April 26th, 2:30-4:00, FGH atrium
  • Friday, April 27th, 10-11, FGH atrium

Also, Travis and Xiaoyu are holding a review session on Thursday, April 26th, from 5:30 to 7:00 in FGH 200. They’ll be focusing on solving more computationally oriented problems.

Clicker Questions for Chapters 5 and 6

And here are clicker questions from Chapters 5 and 6, which cover inference for proportions and small-sample inference. Check the last page of the PDF for answers.

Check back here soon for a study guide for the final exam, as well as solutions for the second midterm. Let me know if you have any questions about the final or if there are other materials I can provide that would be helpful.

And the Winners Are…

I’ve tallied the results of today’s voting, and I’m pleased to announce the winning infographics in each of the three categories.

That means two wins for Hukkelhoven, Sanicola, and Thoni, and a win and an honorable mention for Cummings, Getz, and Weaver. Congrats to all of the finalists!

Image: “Blue,” Kaytee Riek, Flickr (CC)