It’s a Thursday afternoon in Carrie Kell’s “Writing Workshop” course and Anna Boranian ’27 is busy pasting her rhetorical analysis paper into ChatGPT. Clicking on the chatbot’s message bar, she informs the generative AI that she will be asking it questions about the draft she’s provided. Throughout the classroom, Boranian’s peers are tapping at their keyboards as they each communicate with their virtual peer review partner.
Kell, who’s addressing questions around the room and reading the advice ChatGPT is giving students, pauses at Boranian’s computer screen.
“What’s ChatGPT telling you?” asks the lecturer and interim coordinator of First-Year Writing.
Boranian shares that the chatbot has pinpointed areas of repetition within her text. When Boranian says she’s unsure how to say things differently, Kell suggests she ask ChatGPT what a revision would look like. Boranian’s fingers quickly get moving.
Bryant’s “Writing Workshop,” which is part of the university’s new general education program, helps students strengthen their writing skills and familiarizes them with the conventions and rhetorical strategies used in different genres. Given the growing influence of generative AI in the workplace and public sphere, Kell is teaching students how they can leverage ChatGPT as a collaborative partner during the writing process.
According to Kell, much of the media attention surrounding generative AI focuses on cheating and plagiarism, which obscures the technology’s value. Marketing, entertainment, and manufacturing are just a few of the many industries using generative AI. Forbes reports that the technology is expected to see an annual growth rate of 37.3 percent from 2023 to 2030, with AI’s market size projected to reach $407 billion by 2027.
Kell notes that AI literacy is an important skill set and that working with the technology in the classroom will teach students how to use it ethically.
“You should never use AI as a producer of text — always as a collaborative partner,” says Kell, who had undergrads use the chatbot to evaluate their thesis statements, argument strength, the organization of their analysis, and overall paper effectiveness.
AI can “hallucinate,” producing information that isn’t accurate, which is why Kell doesn’t recommend using it to collect factual information. Even AI detection systems have low accuracy rates and perform well only when a text is entirely human-written or entirely AI-written, as Terri Hasseler, Ph.D., director of the Center for Teaching Excellence, noted at the Analytics Without Borders conference held at Bryant in late March.
Chatbots need to be properly cued with specific prompt engineering to give constructive feedback, which is why Kell showed students how to give the chatbot a task, provide context, add a persona, and request the chatbot’s response in a specific format.
Specific questions could include: What feedback do you have about my thesis statement? Does my essay effectively argue my thesis? What feedback do you have on my use of evidence to support the main ideas in each paragraph?
“Don’t just accept the feedback. You receive it, but it’s your job to analyze it,” Kell says.
Drawing on the skills they’ve developed this semester, students will produce a research paper for their final project, using generative AI as a collaborative partner during the brainstorming, ideation, research, and review phases. In the meantime, Kell wants students to reflect on their interactions with AI.
“I want you to step back and ask yourself: What was valuable? What was not so helpful? And what was lacking compared to a human peer review?” says Kell, adding that students should still take advantage of Bryant’s Writing Center or find a human peer reviewer when the opportunity allows.