Learning through Elaboration (and AI Chatbots)

I publish a newsletter called Intentional Teaching most Thursdays. What follows is one of my favorite pieces of newsletter writing from the summer, and I wanted to give it a wider audience here on the blog. I hope you find it interesting! You can sign up for the Intentional Teaching newsletter here.

Last week during a virtual faculty presentation on generative AI and teaching, I shared the ChatGPT assignment that Yale Divinity School professors Tisa Wenger and Erika Helgen used in the spring. This was the assignment, described in a newsletter earlier this year, where students asked ChatGPT to write sermons and speeches from certain historical perspectives and contexts and then wrote papers critiquing ChatGPT’s not-so-great attempts to do so. I really like this assignment because it leverages what ChatGPT is good at (writing in a particular genre) and what it’s not good at (writing something that’s not woefully generic).

When I read through the text chat after my presentation, I saw this comment from a faculty participant: “Why on Earth would anyone need a sermon written by a robot. How psychotic.” He followed that comment with this: “Every minute those students spent reading ChatGPT content they could have been reading the work of a brilliant human.” That is technically true, but at the risk of sounding a little defensive, I’d like to unpack the Yale assignment and connect it to some of what we know about how learning works.

I read Remembering and Forgetting in the Age of Technology by Michelle D. Miller this summer while traveling on airplanes. (I do most of my reading on airplanes, which is why I read so few books in 2020.) The book is a fantastic explanation of how memory works and how technology can inhibit or enhance memory. Michelle has a section in the second chapter on “deep processing,” where she writes, “This intensive thought, or ‘deep processing,’ helps drive the content into long-term memory… spotlighting the plain fact that simply being in the presence of course materials in no way assures that it will be remembered.”

Michelle argues that “coaxing” students into thinking about material on a “deeper level” helps them learn that material. That’s one of the things that the Yale assignment does, by asking students not just to read primary and secondary historical texts but also to apply those texts to an evaluation of sermons and speeches written by a robot. The assignment calls for the kind of deep processing that will help students understand, retain, and be able to use the historical information found in those texts.

As another example, I learned about a website called character.ai from Kelly Rivera, professor of political science at Mt. San Antonio College. Kelly reports that her students aren’t always super enthusiastic about reading and engaging with historical political documents like The Federalist Papers, written in the 1780s to support the ratification of the United States Constitution. She also reports that her students have been very excited to chat with an AI James Madison, one of the authors of The Federalist Papers, about his writings and his political ideas.

Character.ai has a host of AI chatbots patterned after historical figures, fictional characters, modern-day influencers, and more. Their James Madison bot introduces itself by saying, “I am President James Madison, for me it is still 1836 and I am still alive. It is though I have come to you in a time machine to talk to you armed only with my personal knowlege of my own time.” (It spells knowledge without a “d,” and I’ve just spent more time than I should have trying to figure out whether that was a common spelling in the 1830s.)

Asking students to chat with an AI James Madison about one of his most famous works is a way to coax those students into deep processing about that work. Conversation is a great way to move students into deep processing, and the AI James Madison just happens to be tuned to engage in conversations relevant to Kelly’s course material. It’s not actually enough to read the work of a brilliant human; students need to do something constructive with what they’ve read, and I can see Kelly’s AI assignment providing that opportunity.

The photo above is one I took in early 2020 of my wife Emily at the Hunter Museum of American Art in Chattanooga, Tennessee. She’s taking a photo of a piece called The Wreck of the Old 97 by Thomas Hart Benton. Emily is a fan of Benton, particularly his compositions and his way of depicting movement. The painting depicts a famous train accident, and while I was taking my photo of Emily, I kept wondering if the band called the Old 97s was connected. (I saw them in concert in Boston years ago with a friend.) A quick Google search let me know that there is a famous country ballad about the train accident, a song covered by Johnny Cash among others, and that’s where the Old 97s got their name.

Why do I know all this about this painting and the train wreck it depicts? Because Emily and I stood in front of the painting for 10 or 15 minutes talking about it. We were engaged in deep processing, sharing our different perspectives on the painting and making connections from the painting to our prior knowledge and experiences. There’s another term from the learning sciences that I like for this deep-processing-through-conversation, and that’s elaboration, sometimes called elaborative interrogation.

AI chatbots certainly aren’t the only way to help students engage in the kind of elaboration that’s useful for learning, but they do seem to provide an option for that kind of learning activity. Thanks to my colleagues at Yale and Mt. SAC for sharing these AI-powered assignments.

The above was originally published in my Intentional Teaching newsletter on August 31, 2023. You can sign up for the Intentional Teaching newsletter here.
