The latest issue of the American Mathematical Society’s Notices includes an essay by Steven Zucker [PDF], a mathematician at Johns Hopkins University. In the essay Zucker argues that student course evaluations are not appropriate for evaluating the effectiveness of teaching and that the use of “surveys” to do so “pushes us to dumb down our courses” in an attempt to make students happier by making courses easier.
I agree with Zucker that “surveys provide information only about certain things.” Using multiple methods is appropriate when evaluating teaching. (See, for example, Richard Felder’s take on evaluating teaching [PDF].) However, Zucker also claims that “Student performance [learning] is largely independent of instructors’ ratings,” which isn’t supported by the research as I understand it. For instance, see this lit-review-as-FAQ from the University of Michigan’s teaching center.
How are instructors “dumbing down” their courses? Zucker’s list includes “building the subject slowly from the bottom up,” “giving lots of examples in class,” “dropping topics from the syllabus when convenient,” and using homework problems as “models” for exam problems. I disagree. I consider scaffolding learning, helping students learn inductively, focusing on depth over breadth, and aligning practice with assessment to be elements of effective teaching, not signs of lowered standards. I may be misreading Zucker’s claims here, but I don’t think so.
I also question Zucker’s advice to “accept that most of the learning takes place outside of class.” I think the classroom is an under-utilized venue for learning, which is why I’m such a proponent of classroom response systems and other student engagement techniques.
What are your thoughts? Does Zucker’s opinion piece reflect common faculty perspectives on your campus?
Image: “Survey” by Flickr user Mars Hill Church Seattle / Creative Commons licensed