This post is getting some traction, so I wanted to let readers know that I run a newsletter and podcast called Intentional Teaching. I explore topics like AI and teaching regularly in those venues, so subscribe today!
Back in 2020 during the first wave of the COVID-19 pandemic in North American higher education, there was a moment around the middle of the summer when the fall semester started to come into some kind of clarity. For April, May, and June, it was easy for most instructors to put the fall semester out of their minds. They were busy wrapping up the most challenging spring semester ever, and the changing nature of the pandemic and pandemic precautions meant it was hard to predict what the fall would be like.
Then July rolled around, and, at least on my campus, it was clear that many of us would be teaching classes in a particular and particularly challenging modality: “Zoom in the room,” one colleague called it. Some students in the classroom, masked and physically distanced, and some attending virtually through Zoom. I remember that short window of time where we all seemed to realize we would have to figure out how to teach like we had never taught before!
This summer, we seem to be having a similar moment, realizing that teaching in the fall will have to look very different for reasons outside our control. Thanks to ChatGPT and a legion of other generative AI tools, we are going to have to teach this fall, once more, like we have never taught before. We have had a few months to wrap our heads around what these AI tools can do (although that set of capabilities continues to change), and the fall semester is just close enough that faculty and other instructors are realizing they need to attend to the hard task of revising their fall assignments and assessments in light of these new tools.
As usual in all things AI and education, the University of Pennsylvania’s Ethan Mollick is ahead of the curve. On July 1st he published “The Homework Apocalypse” on his Substack blog and newsletter in which he considered three very commonly used assignments — the essay, the reading response, and the problem set — and showed how ChatGPT and Bing and other AI tools are going to force rather dramatic change in these assignments. This week, I’ve been thinking a lot about one of those assignments, the reading response, and I would like to kick around Mollick’s suggestions for how to modify such assignments.
Mollick notes in his piece that you can give Bing a PDF of an MBA case study and ask it to summarize it as if it were an MBA student. And Bing does a pretty great job of that, even providing a comment for a hypothetical student to make in class were they to be called on. Mollick then used another AI tool, Claude, which can accept much longer inputs, to summarize a 30,000-word book that Mollick wrote a few years ago. Again, the AI tool did a great job, without any errors. If an MBA student would like to avoid doing the reading, these tools can give them pretty useful summaries they can use to participate in class discussions.
I don’t actually know how a typical MBA professor holds their students accountable for doing the reading and coming to class prepared, but I know how I tend to do that. Take, for instance, the cryptography course I taught at Vanderbilt. We used The Code Book by Simon Singh as our course textbook. It’s a popular math book (that is, one you might find in a commercial book store), it’s very readable, and it’s cheap compared to academic textbooks. I would ask my students to read a chapter from the book and respond in writing (on a blog or on Teams) to one or more reading questions designed to point them toward the big ideas in that chapter. I would then use their responses to structure the class discussion of that chapter, even showing selected responses on the big screen for us to talk about together.
Here are three reading questions I’ve used in the past for this purpose:
- When the Zimmermann telegram was deciphered by the cryptanalysts of Britain’s Room 40, Admiral William Hall decided not to tell American President Woodrow Wilson about its contents because doing so might let the Germans know that Britain could break their codes. Given the danger posed to America by the unrestricted U-boat warfare indicated in the telegram, was this ethical of Admiral Hall?
- Germany learned that Britain had broken their codes from histories of the First World War written by Winston Churchill and the British Royal Navy. Given that this knowledge prompted Germany to invest in the Enigma machine technology prior to the Second World War, should these histories have been published? What might have motivated Britain to make their code-breaking success known in this fashion?
- Given the various incidents recounted in this chapter, what are some conditions that seem favorable to the advancement of military cryptography?
And here is how Bing’s AI-powered chat tool (free for anyone to use) answered these very questions:
Those are all good answers. They’re actually longer, more detailed, and less interesting than the responses my students typically write, but these content-focused reading questions are definitely softballs for AI text generators like Bing and ChatGPT.
What does this mean for reading response assignments this fall? Mollick presents three options:
- Keep the same basic approach to reading assignments, but test any reading assignment in advance to see how well they are processed by AI (make sure to use the latest models). Focus assignments on topics the AI does not answer well.
- Design assignments so as to limit the AI to helping with understanding and preparation. This can be done by having readings serve as the basis for in-class discussion. To lower AI-driven work, do not disclose the exact topic of discussion in advance.
- Ask the students to engage with the AI, checking the AI answers for errors and expanding on good or bad points the AI makes. Using AI as a reading partner and tutor has a lot of potential, but requires experimentation.
For my cryptography course, Mollick’s first option would probably mean throwing out all my existing reading questions. My intent with these reading questions was noble, that is, to guide students to the big questions and debates in the field, but those are exactly the kinds of questions for which AI can write decent answers. Maybe the AI tools would fare worse in a more advanced course with very specialized readings, but in my intro to cryptography course, they can handle my existing reading questions with ease.
What about option two? I think one version of this would be to do away with the reading response assignment altogether. Just tell students to do the reading and hold them accountable for doing so through in-class discussion of “surprise” questions, that is, questions students don’t get in advance. I’m not crazy about that version, since I think students do need help in making sense of course readings. Not having any guidance would be challenging for some students, although they could ask Bing or ChatGPT to summarize the reading as a self-check on comprehension or to identify potential discussion topics from a reading. Some students would also experience unnecessary anxiety by not knowing the class discussion topics in advance, which seems like an undesirable difficulty.
Another version of option two might be closer to what Rosemary McGunnigle-Gonzales does in her sociology courses at Hofstra University. I met Rosemary at a conference at Hofstra this winter, and she will be a guest on an upcoming episode of my Intentional Teaching podcast. As she shares in the podcast interview, Rosemary engages her students during class sessions in a variety of creative ways, and I asked her how she ensures they come to class ready to participate. One tool she uses is what she calls a “reading memo.” Functionally, it’s a lot like my reading question assignment, with students submitting their memos before class, but the kinds of questions she asks are very different from my reading questions. Here are a few:
1. What new thing have you learned?
2. What new thing can you explain?
3. What new question do you have?
4. What random something caught your attention?
Those are pretty great questions, right? Rosemary described them as inviting students to make the kinds of cognitive moves that a sociologist might make as they take in a new reading. It reminds me of the “structured reading groups” approach I learned about from two other sociologists, Heather Macpherson Parrott and Elizabeth Cherry. They also ask their students to make particular cognitive moves with the reading, like looking for connections to other course material or playing devil’s advocate with the author.
I shared Rosemary’s reading memo in a coaching session with New Mexico State University instructors earlier this week, and I argued that these are questions with which ChatGPT and its peers might struggle. You can’t just cut-and-paste these questions into ChatGPT and get useful answers. The questions are also more personal than my reading questions: “What new thing have you learned? What new question do you have?” If I were to do an assignment makeover for my cryptography reading questions, might Rosemary’s memo approach provide a model?
After that coaching session, one of the participants, Matt Huycke, took up my challenge. He didn’t cut-and-paste Rosemary’s questions into ChatGPT, but he did use ChatGPT’s Playground mode to ask the tool to write like a third-year undergraduate in an anthropology program with a concise writing style. Then he asked ChatGPT to respond to a series of questions about Discipline and Punish: The Birth of the Prison by philosopher Michel Foucault (not the pendulum guy, the other Foucault), using questions like Rosemary’s memo questions. Here are a couple of examples:
Matt: What was one specific concept that finally “clicked” for you?
ChatGPT: One specific concept that finally “clicked” for me was Foucault’s idea of the “Panopticon,” inspired by the architectural design of a prison by Jeremy Bentham. The design allows a single watchdog to observe all inmates, without them knowing whether or not they are being watched. Foucault uses this to demonstrate how power relies on visibility and control. The Panopticon serves as a metaphor for modern social order; people conform to social norms due to the internalized fear of being watched or scrutinized. While it’s still a complex concept, I feel that I’ve gained a firmer grasp.
Matt: What random something caught your attention?
ChatGPT: One thing that caught my attention was how Foucault compares institutions like prisons, schools, and hospitals. On the surface, these seem like very different entities – one for punishment, one for education, one for health. Yet, Foucault argues they all employ the same basic mechanisms of control and power. Despite their differences, they are created to manipulate and shape human behavior. This way of looking at it is a provocative departure from conventional modes of thinking about these institutions, and it has definitely made me view them in a new light.
Matt: Can you ‘dumb that down’? I don’t think I understand exactly.
ChatGPT: Sure, I’ll simplify it. Foucault suggests that places like schools, prisons, and hospitals all have similar goals. They all aim to make people behave in a certain way – students to learn, prisoners to reform, patients to recover. These places use rules, monitoring, and systems to make this happen. Foucault thinks these are methods of control and power, shaping how people act and think. It’s like these places teach us how to behave, and that’s a form of power.
Okay, so with a little prompting, ChatGPT can answer questions from Rosemary’s memo assignment with some degree of utility! Yikes.
Where does this leave us? Let’s return to Mollick’s second option and this key sentence: “This can be done by having readings serve as the basis for in-class discussion.” The format of the reading response we ask of students isn’t as important as what happens next, that is, what happens during class with those readings. If I give my students my old reading questions or Rosemary’s memo questions and then I spend the entirety of class lecturing over the material, why should my students take the time to write their own reading response? From their perspective, it’s just busy work that goes into some black hole, never to be seen or used again.
However, if I’m going to use the answers that students provide to pre-class reading questions during class, maybe building class discussion around those answers or having students prepare answers to specific questions for use in small-group discussion, then there’s a reason for students to do that work themselves. Sure, maybe they’ll get an assist from ChatGPT or Bing, but they’ll still need to have some level of ownership over their answers, if they’ll be sharing and elaborating on those answers during discussion.
That’s where I think Rosemary’s questions are more useful than mine, since her questions are so personal. Imagine the student who did what Matt did and had ChatGPT help them craft answers to those personal questions. That student will need to study ChatGPT’s suggestion for a “random thing that caught your attention” so they can share it during class. Even if they didn’t do the entire reading, that’s still valuable preparation for class. And that goes back to the goal of a reading assignment. My goal is to have students make some sense of the reading on a first pass and to come to class prepared to go deeper. With that goal in mind, my big-picture, content questions provide one way to reach that goal, but there are other ways to get there, ways where AI tools might be a little less disruptive of the learning process.
I’m reminded of the Chronicle of Higher Education essay that Anna Mills wrote back in March. She had been beta testing AI tools that are now available to the rest of us, thinking about how they would affect the kinds of writing assignments we use in higher ed. Her conclusion was that we’re going to need to focus on motivation and the writing process itself. Here are her four pieces of advice, framed as for writing assignments, but imagine as you read them applying to any kind of assignment you might give students:
- Assign writing that is as interesting and meaningful to students as possible.
- Communicate what makes the process of writing valuable.
- Support the writing process.
- Focus on building relationships with students as a way to help them to stay engaged.
That’s good advice as we spend this month and next remodeling our assignments for a fall semester full of generative AI tools.
P.S. I didn’t address Mollick’s third option, having students critique AI output, but here’s a great example of that approach used on a math problem from Robert Talbert.
Update, 7/18/23: In John Warner’s April 21, 2023, blog post, which I just now read, Warner suggests a different response to the challenge outlined above that may sometimes be useful:
“Consider not assigning out-of-class writing as part of an assessment where the purpose is merely to elicit the capture and understanding of pre-existing knowledge or information. Given that ChatGPT can achieve this quite easily, and students may not see the value in learning the material for themselves, the conditions for academic dishonesty may be ripe. It’s possible that a well-designed in-class multiple-choice assessment is a better fit for the assessment objective.”
A short quiz over the reading at the start of class might not provide much help to students in making sense of the reading, but if the goal is to make sure students have gotten certain basic information out of the reading, such a quiz might be a smart move this fall.
That reminds me of another approach, which also involves changing the kind of reading assignment, not just the questions: Asking students to collaboratively annotate the reading in a tool like Perusall or Hypothesis. See my post on asynchronous active learning through social annotation for ideas on how to do this.