Takeaways from Six Conversations on Teaching and AI
With this month’s double header, the Intentional Teaching podcast has now devoted six episodes to the topic of teaching with and about generative artificial intelligence (AI). Since we’re about a year and a half out from the launch of ChatGPT, I thought it would be a good time to reflect on those six episodes and share what those interviews taught me about teaching and AI.
In my November 2022 interview with Robert Cummings from the University of Mississippi, posted just a week before the launch of ChatGPT, Bob pointed out that until this point in human history, whenever we encountered writing, there was some kind of human thought behind it. Maybe not great thinking, but some kind of thinking. Now we have AI that can generate writing that has been “divorced from thought.” And that’s going to take us some time to get our heads around.
Bob also introduced me to the story of the album Chain Tripping by the band YACHT. Lead singer Claire Evans has talked about the band’s process for creating that album, which involved feeding their existing music into an AI tool and then asking it to generate more music in the style of YACHT. They used the outputs of the AI to create the new album, remixing and adding to the AI-generated lyrics and music as they liked.
I think hearing that example before I was introduced to ChatGPT helped me see the tool differently: as a tool for creative expression, not as some kind of question-answering machine, something I blogged about later.
When I realized that the field of computer science education had been working with and around AI code generators for more than a year before the rest of us had to contend with ChatGPT, I wanted to talk to someone from computer science who could speak to that experience. Thanks to listener and CS professor William Turkett, I found Brett Becker of University College Dublin. Brett pointed out in his April 2023 interview that the kinds of programming assignments in introductory computer science courses are ones that AI tools like ChatGPT can handle quite well. That includes asking students to write code that accomplishes certain tasks, but also describing what a piece of code will do or finding the error in non-functional code.
What to do about that? Brett said that many computer science educators and researchers are leveraging these tools, not hiding from them. And that would mean changing learning objectives, especially in introductory courses. In a co-authored white paper, Brett and his colleagues argued that intro courses might need to focus more on reading and evaluating code than on writing it. That's because the AI can often write the code for you, but you still need some expertise to make sure the code it generates is functional, efficient, and secure.
I’ve shared that argument in many workshops since then, noting that we might have to make similar decisions in other disciplines where generative AI can take on relatively simple tasks, like polishing the syntax and grammar in a paper. That’s not news that everyone takes well, but it is something that many fields will need to figure out as these tools become better known and move into the mainstream.
In her 2022 book Remembering and Forgetting in the Age of Technology, Michelle D. Miller writes about the “moral panics” that often happen in response to new technologies. In his 2013 book Cheating Lessons: Learning from Academic Dishonesty, James M. Lang argues that the best way to reduce cheating is through better course design. I knew these two would have some thoughts on teaching with and about generative AI, and I wasn’t wrong!
I asked Michelle if higher education was in the middle of a moral panic over generative AI, and she said that was a big yes. Moral panics are characterized by hyperbolic statements about the new thing, whether those are positive (“AI will bring a glorious new future of equitable education!”) or negative (“AI will destroy higher education as we know it!”). Michelle noted, “At the risk of possibly missing out on some big trends… my first reaction to a lot of this is let’s take a breath.” The actual impact of new technologies is usually somewhere in between.
Later in the conversation, Jim made an argument for unbundling our assessments in light of generative AI. He noted that most assignments involve a mix of skills, like a research paper that involves finding and evaluating sources, summarizing arguments and examples, structuring a thesis, revising one’s writing, and more. Instead of thinking of that paper as one monolithic assignment, we’ll want to consider each of those skills individually and decide whether students need to develop expertise in that skill or whether it’s one where they can lean on AI as a shortcut.
I should note that Robert Cummings and his Mississippi colleagues experimented early with generative AI in writing courses and landed in much the same place. They write about their approach to unbundling in a new paper.
One skill we often want students to develop is that of creativity. We don’t always think of creativity as a skill that one can build, but design thinkers do. They have a variety of techniques to help themselves and others increase their creativity. When I learned that Garret Westlake, who teaches design thinking at Virginia Commonwealth University, was using generative AI in his design thinking work, I asked him on the podcast to share what he’s learned.
Garret shared that as a child he was diagnosed with dyslexia and dysgraphia, making it hard for him to spell and to write by hand. His school allowed him to type his work on a computer, but he was clearly warned that any use of the computer’s spellcheck would be an honor code violation. He found himself choosing suboptimal words in his writing, just because he could spell those words. Garret argued that generative AI might follow the same path as spellcheck: today’s prohibitions in education could give way over time to encouragement to use it, and that could be a good thing for creativity.
Garret spoke about AI’s potential roles in prototyping, where you might have an idea and need a quick way to visualize that idea so you can refine it. He also talked about AI in brainstorming, sharing a recent experience where he had a room full of faculty brainstorm ideas around some topic, then asked ChatGPT to generate ideas on the same topic. There was a lot of overlap between the two lists, but ChatGPT listed a few ideas the faculty hadn’t thought of, which spurred useful conversations.
Given all the criticism generative AI has received for echoing biases found in its training data, I found it very interesting that Garret argued that ChatGPT might provide a perspective or two not found in a group of relatively homogeneous faculty!
Earlier this month on the podcast, I talked with Sravanti Kantheti, who teaches anatomy and physiology at Lanier Technical College. I wanted to learn about her experiences using Top Hat Ace, an AI-powered learning assistant that’s now part of the Top Hat teaching and learning platform. ChatGPT can answer student questions drawing on information across the internet, which can lead to some odd results, but what about an AI chatbot that’s been trained on the learning materials in your own course?
Sravanti reported that Ace does a great job answering student questions about course material, in the sense that the answers it provides are correct and appropriate to the level of the course. Her subject, anatomy and physiology, can be taught at a third-grade level or a medical-school level, but she needs her students to encounter explanations pitched to beginning undergraduates, and Top Hat Ace does just that. She also said it does a great job summarizing and outlining material in her course, something her students leverage when studying for exams.
We got into an interesting discussion about the value of outlining material. Do students lose a learning opportunity when the robot does the outlining for them? Maybe. I’m hoping to see some educational research on that soon. In the meantime, however, I think there are use cases for an AI reading assistant that make a lot of sense.
This brings us to last week’s podcast episode featuring Pary Fassihi of Boston University. Pary is part of a pilot program at Boston University helping instructors explore the use of AI in their teaching, and she has developed a number of creative and thoughtful AI activities and assignments for her writing and research courses. She has found it useful to have ChatGPT serve as a kind of AI reading assistant for her students, making it easier for them to tackle new readings during class, since ChatGPT can help them get the basics of an essay or article very quickly.
Pary’s other assignments are just as interesting, from having ChatGPT provide a kind of targeted peer review on student writing to asking students to use AI image generators like Adobe Firefly to create art inspired by particular artists as a way to dig into the question “What is art?” One thing I really appreciated about how Pary talked about these activities was how cognizant she was of all the ethical questions these uses of AI bring up, and she confronts them head-on with her students.
Pary regularly shares her teaching-and-AI experiments on LinkedIn, and if you’re not following her there, you should!
That’s it for my review of Intentional Teaching episodes focused on AI. If you know of other people I should bring on the podcast to talk about these topics, please tell me about them!