It’s been more than a year since generative artificial intelligence entered the limelight, yet colleges and universities across the country are still scrambling to develop AI policies that work at the institutional, departmental, and course levels. This new technology, which creates text, images, videos, and other media in response to prompts, is continuously evolving and is already being employed across numerous industries.
“I’m consistently blown away by how quickly generative AI improves and how many directions it has taken,” says Allison Papini, assistant director and manager of research and instruction services for the Douglas and Judith Krupp Library. “If you had asked me a year ago what I hoped to see in 12 months, my greatest wish would be to eliminate hallucinations and have generative AI provide supporting references. While the technology isn’t totally perfect, these wishes were largely met within months.”
In an effort to foster a better learning and working environment for students, faculty, and staff, and to avoid reactive responses to the technology, Papini, along with three outside researchers, published a paper in Educause Review that offers guidance and recommendations for developing generative AI policy.
A necessary step
ChatGPT, Google’s Bard, and Microsoft’s AI-powered Bing are among the many generative AI tools presenting a variety of possibilities, problems, and paradigms to institutions. In higher education, the technology has already been used for graduation speeches, tutoring, and press releases. According to Papini and her colleagues, colleges and universities need to determine where generative AI tools are appropriate and where they raise ethical or legal challenges.
AI-generated plagiarism, and how to detect it, has become a hot topic because the technology challenges existing academic integrity policies. AI writing detectors promise to spot such plagiarism, but false positives and false negatives undermine their reliability. The systems have also shown bias, disproportionately flagging multilingual learners’ work as AI-generated. As a result, the researchers note, students face growing pressure to defend themselves against a machine that offers only a prediction and cannot show its work.
While plagiarism is just one issue higher education must address when crafting policy, the researchers point to other considerations as well: the role of generative AI in marketing, social media, and reports; how faculty use generative AI to create course content, assignments, and feedback; and the impact on the workers needed to run generative AI systems.
A model that fits institutional needs
A policy’s audience, timeline, and boundaries of acceptable use are just three of the elements institutions must weigh when developing a generative AI policy. The researchers note that colleges and universities should take time to define their goals and agree on specific, measurable outcomes.
“Creating a generative AI policy at an institutional level will take some time,” Papini says. “It will take at least a year to have conversations with key stakeholders to create and refine a sustainable policy. Institutions should review the policy regularly while being intentional when making changes. Reviewing the policy once a year or so would be a great first step for many organizations.”
The researchers outline five policy-development models for institutions to consider: a task force model, a governance model, a design sprint model, a consultant model, and an exemplar model. Depending on a college or university’s needs, these models can be mixed and matched.
Including a range of perspectives
Generative AI can touch every area of an institution, which is why the researchers stress including the voices of students, faculty, staff, and other stakeholders. Involving students is particularly important: they are already engaging with the technology and may tie generative AI to their future employment goals.
After assembling stakeholders, Papini and her fellow researchers suggest colleges and universities consider questions such as: Will the institution’s upper management use generative AI to monitor employees’ work for efficiency or to generate employee evaluations? How will AI be addressed in human resources? In what ways will the community beyond campus be affected by these policies?
While policy development around generative AI is complicated, initiating a collaborative approach will help inform and better unify an institution around this challenge. As for Bryant, Papini says the university is on the right track.
“We’re one of 19 schools involved in the Making AI Generative for Higher Education project in collaboration with Ithaka S+R, and we have a Bryant team conducting a study to gather information from the Bryant community to make evidence-based policy recommendations to the university,” Papini says. “We’re about six months in with another year and a half to go. Right now, we’re at the tail end of a survey that has been shared with the Bryant community, and our next step will be to conduct interviews with campus stakeholders so we can capture the needs of as many campus groups as possible.”