One brilliant spring day in March, as the sun pours through floor-to-ceiling windows in Bryant’s Quinlan/Brown Academic Innovation Center, a think tank of national experts is shining its own light on the most polarizing topic in higher education today: artificial intelligence.
The agenda for the seventh annual Analytics Without Borders conference, hosted by Bryant with hundreds of attendees across industry and academia, includes a series of compelling and timely workshops, research presentations, and keynote addresses, including one delivered by Microsoft Education’s Chief Innovation Officer Michael Jabbour. During his talk, Jabbour shares that he recently used AI to create an elementary school schedule that championed flexibility and fluidity. His 45-minute experiment, Jabbour reports, saved the district 10,000 hours of planning time.
The AI anecdote sends murmurs through the crowd, which reverberate two buildings across campus to Lecturer Carrie Kell’s first-year “Writing Workshop” course. Here, students like Anna Boranian ’27 are learning to leverage generative AI platforms, such as ChatGPT, in their academic pursuits. As Boranian pastes her rhetorical analysis paper into ChatGPT’s message bar, she informs the generative AI chatbot that she will ask it a series of questions based on the provided draft.
“What’s ChatGPT telling you?” Kell, interim coordinator of First-Year Writing, asks her student.
The chatbot, Boranian answers, has pinpointed areas of repetition within her text. In response, the first-year student’s fingers fly across the keyboard, and in seconds, her draft is stronger than before.
Since its headline-grabbing debut in the fall of 2022, ChatGPT has stirred a range of responses on college campuses. Some educators see wide avenues of potential in the new technology; others worry that students might sacrifice learning at the altar of easy answers.
Reactions from Bryant faculty have been similarly mixed, reports Rupendra Paliwal, Ph.D., provost and chief academic officer. And there is some evidence to support their concerns: A January 2024 survey by Intelligent found that more than two-thirds of college respondents used the tool for help with writing assignments; a third of the students used ChatGPT to write entire essays.
At the same time, Paliwal says he has been impressed by how quickly Bryant’s faculty have committed to constructive discussions about how to work with, and leverage, this groundbreaking technology — one that will reshape both academia and business forever.
Days after the Analytics Without Borders conference, university leaders released a comprehensive set of guidelines for use of generative AI that emphasizes responsible handling as well as maintaining data privacy and security. The work was informed by a January board retreat — attended by both university leaders and trustees — that included a morning devoted to AI.
Senior Paycor executives, including CEO Raul Villar Jr. ’89, discussed the evolving demands of employers, emphasizing the imperative for the talent pool to adapt, and strategically distinguish itself, in the age of AI. An engineer from IBM Watson filled in the landscape further, sharing research, applications, and how industries are changing in the age of thinking machines.
Bryant cabinet members also offered their own thoughts on AI to the board so there would be “full alignment across the institution,” says Edinaldo Tebaldi, Ph.D., vice president for strategy and institutional effectiveness. “Our board members had very good questions about what institutions like Bryant should be considering, and how we can strategically position ourselves to respond to AI.”
Built into the new policy is a tacit understanding that a purely punitive approach will not work in the classroom; rather, the focus should be on embracing AI responsibly. “We can’t control behavior; it’s better that we learn about how people behave and then respond, rather than try to write policies that are not enforceable,” says Tebaldi.
Instead, Assistant Director/Manager of Research and Instruction Services Allison Papini, who has published research on AI and learning and speaks nationally on the subject, encourages “candid conversations about AI, setting clear expectations, and having an open dialogue with students and peers.”
Acknowledging the need for long-range strategies, the university is also taking part in a two-year, 18-school project, helmed by research firm Ithaka S+R, that aims to offer guidelines and guardrails around the use of generative AI in higher education.
Bryant leaders recognize that assembling a policy framework now, especially one attempting to negotiate a rapidly changing field, can feel much like flying a plane while building it. “I don’t expect us to arrive at answers today; it’s going to be a journey,” Paliwal says.
The university’s north star, Tebaldi adds, is its emphasis on transformational learning experiences. “With a focus on fostering exceptional outcomes and nurturing passionate, purpose-driven leaders, Bryant is poised to capitalize on AI opportunities, particularly ones that have real-world applications for our students.”
For more than a year, Bryant has been systematically examining myriad AI touchpoints from student, faculty, and operational perspectives. A dedicated steering committee, focusing on a variety of areas including student success, has been making impressive headway, says Chief Information Officer Chuck LoCurto. Echoing Paliwal, he admits “it’s the kind of race where everybody’s sprinting, but no one knows where the finish line is.”
The key takeaway, says LoCurto, is that AI will impact practically everything at Bryant, from the academic (how to teach and what to teach) to the organizational (how the university can make processes more efficient with AI technologies).
Bryant leaders are currently working to weave AI into the fabric of the school without losing sight of the university’s core values, including a rigorous focus on interdisciplinary education. In this new educational age, that vision will take root in an AI minor, which will be introduced in the 2024-2025 academic year and will not be housed in any one college or school. “We want it to be interdisciplinary so any student can take it,” Paliwal says.
The goal, he suggests, is to offer a nuanced and thorough education in AI. One of the minor’s courses will provide a basic overview of the technology and its evolution; another dives into broad applications with an emphasis on ethical and responsible use; additional electives will focus on AI’s applications in specific disciplines including healthcare, accounting, and other industries. The minor will be supported by the launch of an AI lab in the new Business Entrepreneurship Leadership Center, which will debut during the 2024-2025 academic year.
Paliwal notes that Bryant’s AI emphasis will set students ahead of the curve as they enter their careers, no matter the field. “The exceptional outcomes we deliver for our students are in our DNA, and we want to preserve that important differentiation of the Bryant education,” he states.
Partnerships with industry are helping the university explore this new frontier: With input from professional services firm PricewaterhouseCoopers (PwC), Bryant has launched an Accounting and AI fellowship program. Tech companies such as HP and Microsoft are also providing critical input on how students can use AI to boost their experiential learning takeaways.
The AI focus is showing up in student life, too. The annual App-a-Thon, in which students develop and present apps to improve life on campus, got a makeover this year, says LoCurto: students instead took part in a Prompt-a-Thon, where the competitors who found the most innovative ways to use generative AI won.
Gamification is a strategic way to help students grasp AI concepts, adds Papini, “so this is just an extension of that goal.”
Back at the Analytics Without Borders conference, Center for Teaching Excellence (CTE) Director Terri Hasseler, Ph.D., and CTE Associate Director of Teaching Support Constanza Bartholomae are running a workshop on how faculty can approach AI detection systems from a teaching and learning perspective.
According to Hasseler, the accuracy of these systems is low, and detectors only do well if something is either 100 percent human-written or 100 percent AI-written. She also notes that the systems have unfairly targeted the speech and writing patterns of multilingual speakers.
And while some have recommended that students analyze ChatGPT output as part of the curriculum, that doesn’t go far enough, argues Geri Louise Dimas, Ph.D., assistant professor of Data Science. Dimas, who attended Hasseler and Bartholomae’s presentation on AI detection systems in March, started out with a strict no-ChatGPT policy in her courses; she has since softened her stance.
The workaround: Students must let her know when they use the tool — a rule aligned with Bryant’s institutional generative AI policy, which requires students to disclose AI assistance. This way, she can tell which sections of the lesson she might need to review with students to deepen their understanding of the material.
“We need to make sure that students are taking advantage of a learning experience where making mistakes and not understanding something is actually beneficial,” says Dimas, whose research harnesses machine learning to detect patterns in human trafficking. “It’s not just about getting through the class and getting the grade, but really making sure you’re learning, because that’s what’s going to make you successful.”
Bryant’s educators are keeping these concerns in mind as they unpack new rules for instruction. The CTE is helping faculty think through the various aspects of how AI is used in research and in the classroom, including ethical angles.
This work is supported by a Davis Educational Foundation grant — one of the first such grants the nonprofit committed to AI in higher education, says Edward MacKay, the foundation’s board chair.
“The Davis Educational Foundation supported the Bryant grant request because it engaged faculty in how to effectively use AI to improve teaching and learning at a time when many institutions were struggling — and still are — with this challenge,” MacKay says. “There appears to be genuine, widespread faculty interest in continuing to explore the AI-related possibilities, bringing AI experts into campus convenings and deliberations, and sharing those outcomes with other institutions — initiatives consistent with the foundation’s mission.”
“The key part is recognizing that AI is going to change every discipline and every industry, from accounting to healthcare,” Paliwal says. “So, knowing that, our question to faculty is: ‘How are you going to adapt what you teach, and how you teach?’”
How to assign papers and other at-home coursework is another big question. “The discussions around AI have prompted faculty to take another look at student learning styles and learn from their differences,” Papini notes. It’s each professor’s responsibility, she says, to “look at how you assess students and ask: ‘Does this need to be a paper or a presentation?’ Maybe giving students a couple of different paths they can take to reach the same goal would be wiser.”
In addition to AI’s use in academia, Bryant leaders are evaluating how to apply the technology to internal operations. “As an organization, we want to truly integrate what we’re preaching, so we’re looking at how our processes can be improved in terms of efficiency and experiences,” Paliwal says.
But before AI can be harnessed in this way, the university must clean and sort its data and ensure compliance with privacy and security regulations, LoCurto says — and he’s working on laying the groundwork for AI usage across university operations. For starters, most IT employees will have to take a LinkedIn Learning course on AI; three staffers are also participating in a 14-week program on Azure machine learning tools.
The opportunities for AI to improve the student and faculty experience are endless, Paliwal notes. And with its focus on the forefront of business, Bryant has set its sights on preparing students for a future of infinite possibilities.
“If we do AI right, it should serve as a teaching assistant for every teacher and as a tutor for every student,” he says.
The key, Paliwal suggests, lies in the university’s most important differentiator. Rarely will you see the CIO and the chief academic officer and provost of an institution working closely together to ensure that academic and organizational strategies are aligned, he notes.
“We’re taking an integrated, intentional, and interdisciplinary approach; we’re not working in silos,” he says. “That’s the agility we have at Bryant, and that’s why we will lead the nation on AI.”
To read more about AI efforts at Bryant, visit news.bryant.edu/AI