Assessing and Representing the Impact of CTLs

Thanks to the accountability provided by our professional development meetings at CETL, I’ve finished Mary C. Wright’s new book Centers for Teaching and Learning: The New Landscape in Higher Education, in which she reports on an extensive study of the websites of centers for teaching and learning (CTLs) in the United States. What aims do CTLs have and who do they serve? What strategies and tactics do they employ? You can read my thoughts on chapter three of the book, about CTL programs and services, in an earlier blog post. In this post, I wanted to share a few reactions to the research Wright reports in chapter five.

In chapter five, Wright takes a deep dive into the 100 or so annual reports she found in her survey of CTL websites a few years ago. Those annual reports represent only 9% of all CTLs she identified, so it’s a little hard to draw general conclusions from her study, but it’s still very interesting. She wanted to know how CTLs represent the work they do and how they report on their impact. The primary way she tackles this question is to categorize the information in the annual reports using Kirkpatrick et al.’s now-standard set of categories for CTL assessment: participation by instructors in CTL offerings, satisfaction reported by those participants, participant understanding or beliefs about teaching, changes in participant teaching practice, impact on student learning, and institutional impact.

For example, 27 of the annual reports Wright analyzed included counts of individual faculty participants in their offerings. Wright compared those counts to published faculty headcounts and determined that on those 27 campuses, 51% of faculty participated in CTL offerings in the given year. The rate at doctoral institutions is a bit lower, at 47%. Wright makes the case that these “reach” rates are useful metrics, echoing an argument I’ve made many times: if faculty didn’t find the services of a CTL useful, those metrics would decline over time. The average rates Wright cites, however, seem pretty high to me. We were hustling at the Vanderbilt Center for Teaching, and I don’t think we ever broke 40% of full-time faculty on our campus. It’s possible Wright’s different denominator (IPEDS data on faculty counts) accounts for the difference, or that a couple of high outliers in her data set skewed the average. I worry a bit about using her stats as benchmarks, since I don’t think many CTLs could hit those reach numbers.
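For readers who haven’t computed a reach rate before, the arithmetic itself is trivial; the hard parts are deduplicating participants across offerings and choosing a denominator. Here’s a minimal sketch in Python, with entirely hypothetical names and numbers (none of this comes from Wright’s data):

```python
# Minimal sketch of a CTL "reach" calculation. All records and counts
# below are hypothetical, for illustration only.

attendance_records = [
    {"offering": "Course Design Institute", "faculty_id": "f001"},
    {"offering": "Course Design Institute", "faculty_id": "f002"},
    {"offering": "Teaching Workshop",       "faculty_id": "f001"},  # repeat participant
    {"offering": "Learning Community",      "faculty_id": "f003"},
]

# Denominator choice matters: a published IPEDS headcount may differ from
# an internal full-time faculty count, which shifts the rate noticeably.
faculty_headcount = 250

# Count each person once, however many offerings they attended.
unique_participants = {record["faculty_id"] for record in attendance_records}

reach_rate = len(unique_participants) / faculty_headcount
print(f"Reach: {len(unique_participants)} of {faculty_headcount} faculty ({reach_rate:.1%})")
```

The deduplication step is exactly the kind of infrastructure question a center has to settle before it can report a defensible reach number.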

Something I noted in my post about chapter three is that this book pays little attention to the work that CTLs do directly with departments, whether that’s an invited workshop for a department’s faculty or a collaboration with department leaders on a core course or curriculum revision. Since chapter three summarized the programs and services that CTLs advertise on their websites, it made some sense that such work might not be represented in Wright’s survey; CTLs might advertise department-based work through other channels, like direct meetings with and outreach to department leaders.

But here in chapter five, I would have expected department- and other unit-based work to show up in the annual reports of CTLs. I know we tried to represent that work in the Vanderbilt CFT annual report, likely one of the reports included in Wright’s analysis. She does mention that the CTL at the University of Texas at Austin identified 70 “engaged departments” in their annual report, where such departments met at least one of three criteria for involvement in CTL offerings (hosting a curriculum revision initiative, reaching a certain threshold of workshop participants, or receiving faculty grants from the center), and they apparently created a fun infographic showing the potential impact on students of work done with these engaged departments.

I’m glad that some annual reports mention this kind of work, but I’m surprised that there’s not more of this work reflected in CTL annual reports. I have found a shift from working with individuals to working with departments to be critical to a CTL’s continued growth in both reach and impact. That was true at Vanderbilt, it is true at the University of Mississippi, and it’s come up multiple times in the external CTL evaluations I’ve done. I would advocate for CTLs to find more ways to represent that work to internal stakeholders (like deans and provosts) and more widely.

As for other forms of assessment, Wright notes that direct measures of faculty learning and application (to their teaching) are rare in her survey, likely because of the resources required to do this work. I would agree with that! We did some of that at Vanderbilt, but only occasionally and for key programs, and we were a well-staffed center. The Vandy CFT gets a shout-out in this section of the chapter for the “active learning cheat sheet” developed by a faculty learning community led by my colleague Cynthia Brame. That cheat sheet served double duty, as a way of assessing participant learning and as a resource useful to other instructors. We often produced such resources through our learning communities, sometimes with participants assisting in the writing, sometimes not.

As for assessing changes in student learning that result from changes in teaching practice that in turn result from faculty participation in CTL offerings: as you might imagine, there’s not a lot of that in the annual reports Wright analyzed. Not only does that kind of assessment require fairly specific conditions and resources, it’s also fairly distal from the core work of CTLs. Wright also quotes a survey respondent from a previous study with a fun spin on the argument that CTLs don’t need to directly assess student learning: “We know Clorox bleaches. We don’t have to re-study this before we do every wash.” That is, we know certain teaching practices improve student learning, so if we’re helping instructors adopt those practices, we don’t have to re-prove that those practices work. By that logic, assessing faculty learning and application is sufficient.

As mentioned, Wright uses the Kirkpatrick assessment model here (participation, satisfaction, and so on), but she proposes a new, seventh category: external influence or sharing. This would include publications, presentations, and participation in educational development consortia by CTL staff. I would also include website teaching guides, podcasts, and other non-peer-reviewed scholarly output. I heard Mary mention this on the Centering Centers podcast, where she said some of her CTL director colleagues are getting institutional pushback on that work, the message being that CTL staff should spend their time internally, on programs and services for their own institutions. “As our field moves from not just craft- or practice-focused to a scholarly approach,” Mary said in that interview, “it is extraordinarily important for us to be doing this external work, not only for benchmarking purposes, but also to be more cosmopolitan in our practices, to have these external communities and reference points.”

I am reminded of some advice that Matt Kaplan, who directs the CTL at the University of Michigan, gave me when I assumed the Vanderbilt CFT director position back in 2011. He said I should find out what kind of assessment my direct supervisor wants to see and focus my assessment efforts in that direction. Why? Because that supervisor presumably has a great deal of influence over your budget, staffing, and other resources, so you need to make sure you’re making the case for your effectiveness to that person. For many years at Vanderbilt, it was the reach statistic mentioned above that my supervisors were interested in, so our team built out the infrastructure needed to track participation across our programs and services in a way that let us compute that statistic.

However, in my last year as director, it was not at all clear what kind of assessment my supervisors were interested in, in spite of my best efforts to ask. It became clear, though, that they definitely weren’t interested in Mary Wright’s seventh category, external influence or sharing. I was told upon the cancellation of our six-year-old podcast, Leading Lines, that “Vanderbilt’s approach to supporting faculty is active, direct engagement. A podcast is neither of those.” That logic could be applied to all of the external-facing work we did, from scholarly publications to conference attendance to our famed teaching guides on our website.

While I agree with Wright’s point that this kind of external work enhances the on-campus work we do as CTLs, it’s clear that some administrators still need to understand that connection. I hope that Wright’s book, and her argument for this seventh category of assessment in chapter five, provides some of that understanding. In the meantime, I think Matt Kaplan’s advice is still very good advice.
