A Bigger, Badder Clippy: Enhancing Student Learning with AI Writing Tools

A digital image generated by DALL-E using the prompt, "a-heavily-armed-spaceship-in-a-laser-battle-with-a-giant-paperclip"

On December 15th, educator and futurist Bryan Alexander hosted an edition of his Future Trends Forum focused on ChatGPT and other AI (artificial intelligence) writing generators and their potential impact on education. I missed that live session, but apparently the discussion was so robust that Bryan scheduled a part two for December 22nd. I joined that session on a very cold Thursday afternoon, and I wanted to share a few highlights and observations here on the blog. If what follows interests you, you can watch the entire recorded session here.

One of the speakers was Barry Burkett, head of Sikanai, a start-up developing AI-powered tools to detect and prevent ghostwriting. Their flagship tool is Auth+, which apparently quizzes students about their submitted writing. If students can correctly answer questions about their writing style, the content of their writing, and their memory of their submission, that’s taken as evidence that the student wrote the piece themselves, rather than having a human or AI ghostwriter do it for them.

I don’t know what I think about this. On the one hand, it makes sense that a student who goes through a writing process themselves would be able to answer questions about that process, so the tool probably “works” at face value. On the other hand, it feels like a surveillance and compliance tool, which makes me uneasy. In my writing seminars, I would rather have students document their writing process for me and for them, both to make sure they’re following some process and to help them reflect on and refine it. I’m not sure Auth+ does much to help students learn to write, but maybe that’s in there somewhere. Regardless, I’m glad to have Auth+ on my radar.

Another speaker was Caroline Coward, a librarian at NASA’s Jet Propulsion Lab. I had no idea JPL had librarians, but it makes sense. What a cool job! Caroline posed some important questions about the ethics of AI writing generators like ChatGPT. How are these systems developed? What biases are built in? How can we find answers to these questions, and what do we do with those answers? These are important questions, and we need to engage our students in asking these questions. As I wrote in my last blog post about AI tools and teaching, “We are going to have to start teaching our students how AI generation tools work.”

To that end, there was lively discussion about when and how ChatGPT gets things “wrong.” ChatGPT generates its responses from patterns in its training data, much of which was scraped from the internet, so the “answers” it provides are sometimes no more accurate than, well, the internet at large. How can we help students understand the limitations of the tool? One activity I’d like to try is to have students ask ChatGPT about something (sports, a hobby) they know really well. This would, hopefully, help students appreciate what ChatGPT can and can’t do for them. Do bear in mind, however, that right now ChatGPT is free to use because the company behind it, OpenAI, is making use of people’s interactions with ChatGPT for its own research. Helping students understand the tool in this way is thus feeding the tool, as well.

Matt Kaplan, executive director of the CRLT at the University of Michigan, suggested another student activity in the chat that I thought was interesting: have students collaboratively annotate the output from a ChatGPT session, as a way to analyze and discuss the tool and what it does. This activity has the advantage of only needing one set of output from ChatGPT for an entire group of students. You might open up a ChatGPT session during class and have students suggest questions for the tool to respond to, based on the course material of the day. Then copy and paste the output into a Google Doc or Perusall course and have students annotate it together. Alternatively, generate the ChatGPT output yourself and have students read and annotate it alongside more traditional readings for the week.

Third on the “stage” at the Future Trends Forum was Lee Skallerup Bessette, assistant director for digital learning at CNDLS at Georgetown University. Lee has put together a fantastic Zotero collection of resources on AI and teaching, and she shared a few thoughts on AI writing generators that were in line with my second point about AI and teaching from my last blog post: When used intentionally, AI tools can augment and enhance student learning, even towards traditional learning goals. For instance, she mentioned students with ADHD, who often have trouble starting writing projects. Could they use a tool like ChatGPT to generate a first draft as a launching point for their writing? That could be really useful. Or consider English language learners, who could use ChatGPT to practice conversation or to help them compose in English. Not all instructors would be comfortable with this, but saying to a student “your English can’t be this good” would be problematic for a number of reasons, Lee pointed out.

Lee also noted that when a task is algorithmic in nature, “the algorithms will eventually do it better.” If your ad copy is basically Mad Libs, then an AI tool that does Mad Libs well will write better ad copy than you. Forum participant George Veletsianos, professor of learning and technology at Royal Roads University, chimed in here: if higher education is preparing students for formulaic thinking, writing, and work, and ChatGPT does that kind of work better, that’s more a reflection of the shortcomings of higher ed than of problems with ChatGPT. These comments are consistent with my third point from that blog post: We will need to update our learning goals for students in light of new AI tools, and that can be a good thing.

Here’s one way to think about changing learning goals in light of AI tools. It’s from Lee again, and it melted my brain a little. ChatGPT and similar tools often produce pretty rough responses when first prompted, but one can refine one’s prompts to coax better responses out of them. Someone who knows a subject well can probably craft a series of prompts that get better and better at producing the expected output from the AI writing generator. So, here’s the brain-melty part: If a student can coach an AI writing generator to produce high-quality output, is that perhaps a sign that the student really understands the content in question? That is, could we use the back-and-forth between a student and an AI tool as an assessment of student learning?
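
If you wanted to experiment with this, the transcript of the exchange seems like the artifact worth collecting. Here’s a minimal sketch of what that might look like. To be clear, the openai Python package, the OPENAI_API_KEY environment variable, and the model name are my assumptions for illustration, not anything described at the Forum.

```python
# A minimal sketch, not a classroom-ready tool: run a back-and-forth
# with a chat model and save the full transcript, so an instructor can
# see how the student refined their prompts, not just the final output.
# The openai package, the OPENAI_API_KEY environment variable, and the
# model name below are assumptions for illustration.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # hypothetical choice; any chat-capable model works

messages = []  # the running conversation doubles as the transcript

print("Type a prompt (blank line to finish):")
while True:
    prompt = input("> ").strip()
    if not prompt:
        break
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model=MODEL, messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(reply)

# Save the whole exchange as the assessment artifact.
with open("transcript.json", "w") as f:
    json.dump(messages, f, indent=2)
```

The point isn’t the code, of course; it’s that the sequence of prompts, not just the final response, becomes something a student could submit and an instructor could read.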

I really like thinking about ChatGPT as the audience for a student. That’s a potentially great way to put the student in a different position with respect to authority on a subject. Instead of looking to ChatGPT as the authority, the student becomes an authority, questioning this new audience about a topic. This is a really useful framing for a whole set of student activities and assignments. I need to learn more about how conversations with ChatGPT work so I can scope out AI-as-audience activities! If you have ideas here, I’d love to hear them.
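
In the meantime, here’s one guess at how such an activity might be wired up: a system message casts the model as a curious listener with no expertise, and the student does the explaining. Again, the openai package, the system message, and the model name are my assumptions, offered only as a sketch.

```python
# A sketch of an "AI as audience" setup: the model is told to play a
# curious novice who asks follow-up questions, so the student, not the
# model, is positioned as the authority on the topic. The wording of the
# instructions, the package, and the model name are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

audience_role = (
    "You are a curious listener with no expertise in the topic at hand. "
    "A student will explain something they know well. After each "
    "explanation, ask one short, genuine follow-up question. Never lecture."
)

messages = [{"role": "system", "content": audience_role}]


def student_explains(explanation: str) -> str:
    """Record the student's explanation and return the audience's follow-up question."""
    messages.append({"role": "user", "content": explanation})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    question = response.choices[0].message.content
    messages.append({"role": "assistant", "content": question})
    return question


print(student_explains(
    "In baseball, the infield fly rule keeps the defense from dropping "
    "an easy pop-up on purpose just to turn a cheap double play."
))
```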

The Forum’s fourth speaker was Brent Anders, director of institutional research at the American University of Armenia. Brent let us know that the next version of ChatGPT is coming soon and it will be “way better.” So there’s that. He also asked this really good question: Do we need to change our definitions of plagiarism? If a student uses ChatGPT as an aid in essay writing, does that constitute plagiarism? The words aren’t necessarily the student’s, but they’re not necessarily someone else’s words, either. If using ChatGPT constitutes plagiarism, what about a less generative tool like Grammarly? Or Microsoft Word’s spellchecker? Where do we draw the line? Is ChatGPT, as Lee Skallerup Bessette quipped, a “bigger, badder Clippy”?

Caroline Coward, the JPL librarian, reminded me that there’s a movement in science now toward transparency, even going so far as to post one’s research questions and designs in advance of conducting the research. (There’s a term for this that escapes me at the moment, but I bet one of you will remind me.) Might we see something similar with AI tools, where researchers cite and acknowledge their use of these tools? Citing one’s sources is one way to avoid plagiarism, so might we ask students to be transparent when they’ve used a “bigger, badder Clippy”?

These last points, along with the Auth+ example from the top, don’t quite fit within the three organizing principles I floated in my last blog post on this topic, so I’m going to float a fourth principle here. Let’s try a positive framing: New AI tools will require a rethinking of community norms and expectations around academic integrity. How does that strike you? It poses more of a question than it provides an answer, but I think it points to something important about community norms.
