Two Ideas for Teaching with AI: Course-Level GPTs and AI-Enhanced Polling

Last month I attended the POD Network conference in Pittsburgh. POD stands for Professional and Organizational Development, and it’s the big professional society in North America for teaching center staff like me. I have a lot to share from the conference, both here on the blog and in my newsletter, and I’ll start with a session that considered the future of generative AI in higher education.

The session was titled “Forecasting How AI Could Change the Work of Teaching Centers,” and it was facilitated by Marc Ebenfield and Karl Carrigan from the University of New England. They took a futurist approach to the topic, walking us through a tool called signals and drivers to help us imagine possible futures for AI in education. This tool involves identifying signals, discrete stories or examples happening now that are worth exploring, and drivers, observable long-term trends. Everything I know about futurism I learned from Bryan Alexander, and this kind of tool is right in line with the work he does. The goal isn’t to predict the future, but to imagine possibilities and explore their implications.

Our discussion of future possibilities was interesting, but I want to share a couple of possibilities raised during the session that are already here. One participant asked, “Might we see course-level GPTs?” referring to OpenAI’s new tool that allows users to create their own AI-powered chatbots, called GPTs, trained in particular domains or designed to respond in certain ways to prompts. For example, OpenAI provides a GPT called “Game Time” that says it will teach you how to play board and card games. My University of Mississippi colleague Marc Watkins whipped up a GPT called “Are You a Witch?” that accuses the user of witchcraft (of the pop culture variety) and insists on the user solving a riddle before answering any questions.

Might we see course-level GPTs, where the chatbot is familiar with the content in a particular course and can help students navigate and maybe even learn that material? The answer is yes, and they’re already here. Top Hat recently launched Ace, an AI-powered learning assistant embedded in its courseware platform. An instructor can activate Ace, which then trains itself on all the learning materials in the course. Students can then use Ace as a personal tutor of sorts, asking it questions about course material. Since Ace knows the course content, it can respond helpfully to poorly formed questions, and it’s less likely to “hallucinate” incorrect information. Ace is trained not just to give out answers but to act like a tutor, asking questions that lead students to their own answers. I haven’t tried Ace yet, so I don’t know how well it works, but I talked with some of the Top Hat folks behind the AI assistant, and they’re being thoughtful about how they’re building Ace as a learning tool.

One caveat for Top Hat Ace: the more learning materials you put into your Top Hat course, the more effective Ace will be as a learning assistant. The usefulness of this kind of course-level GPT depends a lot on how much you use your learning management or courseware system.

“a complicated machine sifting through thousands of handwritten notes” via Midjourney

Back at the POD session, another participant noted that generative AI might be able to provide more actionable data for instructors in real time. We were talking about uses of AI like learning assistants and tutors, which could report to instructors about the kinds of questions students are asking. But when I hear “real time” I think of the synchronous class session, whether that’s on-site or online. I’ve seen how effective classroom response systems can be at engaging students in active learning, but those systems are largely limited to multiple-choice or short-answer questions, especially in larger classes, where a bar graph or word cloud can provide a quick summary of responses. If students are responding to a question with a sentence or short paragraph, it’s hard for an instructor to parse those responses quickly during a live class session. It’s not impossible (see, for instance, the use of ready-set-go questions, often called waterfall questions, in a Zoom chat) but it takes time.

What if I could ask my students a free-response question and then have an AI assistant read and summarize the students’ responses? This is the kind of thing that ChatGPT and similar tools are actually pretty good at, given their facility with natural language. For example, I’ve used ChatGPT to summarize open-ended survey responses when I didn’t need a rigorous analysis of those responses, just an identification of key trends. I don’t know of any classroom response system that currently integrates this kind of AI assistance, but it wouldn’t be too hard to cobble together this kind of tool yourself.
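To make that concrete, here’s a minimal sketch of what “cobbling it together” might look like in Python, using OpenAI’s API rather than the chat interface. This assumes you have the openai package installed and an API key set in your environment; the sample responses, model name, and prompt wording are all just illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# A few open-ended responses, as you might paste them from a poll or a
# shared spreadsheet (illustrative placeholders, not real data)
responses = [
    "I'm worried students will rely on AI instead of learning the basics.",
    "How do we assess learning when AI can produce the final product?",
    "Will AI widen equity gaps among our students?",
]

# Build a single prompt that lists all the responses
prompt = (
    "Identify the three most common questions or concerns in this list:\n\n"
    + "\n".join(f"- {r}" for r in responses)
)

reply = client.chat.completions.create(
    model="gpt-4",  # placeholder; any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```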

For instance, during an online workshop I facilitated this past summer, I asked the faculty participants to move into breakout groups to discuss and respond to a couple of questions on a shared Google spreadsheet. In response to “What questions or concerns does your group have about AI and teaching?” I received 22 answers, which you can read here. At the time, I quickly read through that list of questions and concerns and tried to identify a few to discuss with the full group, but what if I asked ChatGPT for a little help? Just now I took that list of 22 responses and asked ChatGPT to identify the three most common questions and concerns on the list. Here’s what it said:

The three most common concerns expressed in the list are:

1. Over-reliance on AI for Basic Learning: The worry that students using AI for basic tasks might become dependent on it, hindering their ability to progress and develop independent learning skills.
2. Assessment Challenges: Figuring out how to assess student outcomes effectively when traditional methods focus on the product of learning, which may be less reliable with AI usage.
3. Loss of Critical Thinking and Creativity: Concerns about students losing critical thinking skills and creativity, especially in online environments where AI use is less obvious and harder to monitor.

Not bad! Each of those concerns was expressed multiple times by workshop participants. Looking over the full list, however, I see a few other trends not captured in that synthesis. So I asked ChatGPT for three other trends it sees in the list. Here’s the response:

Three additional trends evident in the list are:

1. Equity and Access Issues: Concerns about whether AI technology will widen gaps in learning and tech skills among students, particularly across generational and socioeconomic lines.
2. Ethical and Academic Integrity Considerations: Questions about the ethical use of AI in academic settings, including issues related to academic misconduct, source hallucination, and the potential for AI to produce biased or incorrect data.
3. Need for Institutional Adaptation and Policy Development: The recognition of the need for educational institutions to develop policies and provide support for integrating AI into curricula, as well as establishing guidelines for its appropriate use by both students and faculty.

Once again, ChatGPT provided a useful and accurate answer. There’s one more big trend I see in the full list that hasn’t been captured by ChatGPT, and that’s the added difficulty of navigating these problems in online courses, particularly asynchronous online courses. However, I can imagine that coming up in the full-group discussion after we’ve talked about the six trends that the AI identified, if I’m doing a good job as discussion leader.
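Incidentally, if you were scripting this rather than working in the chat interface, asking for “three other trends” is just a second turn in the same conversation: you resend the history with your follow-up question appended. Continuing the sketch from earlier, under the same assumptions:

```python
# Second turn: include the first exchange so the model knows which
# trends it already named, then ask for three more
messages = [
    {"role": "user", "content": prompt},
    {"role": "assistant", "content": reply.choices[0].message.content},
    {"role": "user", "content": "What three other trends do you see in this list?"},
]
followup = client.chat.completions.create(model="gpt-4", messages=messages)
print(followup.choices[0].message.content)
```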

While I’m hoping that Top Hat or Poll Everywhere or some other response system will start experimenting with this kind of AI assistance, for now I’m not averse to copying and pasting some open-ended responses into a quick ChatGPT query during a live session. If you get the chance to try this out in your teaching or facilitating, please let me know how it goes!

UPDATE 12/1/23: I ran a workshop today at the National Council for the Social Studies conference here in Nashville. During my session, I tried out the AI-enhanced polling idea mentioned above. Using Poll Everywhere, I asked about 50 social studies teachers, “What strategies have you found useful for broadening student participation in your courses?”

I tabbed over to the response history and downloaded the responses as a CSV file. Then I copied and pasted the responses into ChatGPT 4 with the prompt “Tell me what four trends you see among this list of teaching ideas.” ChatGPT gave me four trends (active learning, student engagement, etc.), each with a description and examples drawn from the responses. Here’s the output:

In the list of teaching ideas you’ve provided, there appear to be four main trends:

1. Active Learning: Many of the ideas involve students physically moving or interacting with the material or each other, such as in a gallery walk, “Vote with your feet,” or “split the room” debates. Activities that require students to get up from their desks, engage in small group discussions, and use tactile methods for learning fall under this category.

2. Student Engagement: There’s a strong emphasis on making students feel valued and heard, fostering civil discourse, and creating engaging activities. Strategies like think-pair-share, small group discussions, and using gamification techniques are designed to increase student investment in their learning.

3. Use of Technology and Innovative Methods: The incorporation of AI for research, the use of platforms like Padlet, and online discussion boards suggest an integration of technology to enhance learning and discussion.

4. Inclusivity and Diversity of Expression: The ideas listed support diverse methods of expression and inclusivity, such as allowing for written or drawing responses, using whiteboard markers on desks, and varying activities to cater to different learning styles and interests.

I read the ChatGPT output aloud, then asked the participants to put their hands up if their response wasn’t captured by one of the four identified trends. No one did! ChatGPT did a good job of categorizing the 45 responses to my open-ended question. For 15-20 responses, I probably would have just let them scroll on the screen via Poll Everywhere, but for 45 responses, I needed help summarizing them. Some of ChatGPT’s “bucketing” was not great, but it was good enough to guide a class discussion.
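If you try this and want to skip the manual copy-and-paste, the CSV step is easy to script. Here’s a sketch, with the caveat that I’m assuming the response text sits in a column named “Response”; check the headers in your own Poll Everywhere export, since I haven’t verified the exact format.

```python
import csv

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Read the Poll Everywhere CSV export. The "Response" column name is an
# assumption; adjust it to match your file's actual headers.
with open("poll_responses.csv", newline="") as f:
    responses = [row["Response"] for row in csv.DictReader(f)]

# Same prompt I used during the workshop, with the responses appended
prompt = (
    "Tell me what four trends you see among this list of teaching ideas:\n\n"
    + "\n".join(f"- {r}" for r in responses)
)

reply = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```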

Glad to know this is a viable technique!
