Artificial intelligence (AI) is rapidly reshaping the academic landscape, making classrooms smarter and more adaptive than ever. At the University of New Hampshire (UNH), AI tools are already being integrated across multiple departments.
AI uses machines, specifically computer systems, to simulate human intelligence processes like reasoning and problem-solving. One subcategory of AI is the large language model (LLM), such as ChatGPT, which understands and generates text in a human-like fashion. Most AI tools used in higher education are built on LLMs, including OpenAI’s ChatGPT, Google’s Bard and Meta’s LLaMA. Because the models’ abilities to process and respond to text are broadly similar, their parent companies are competing to be the most accurate and reliable.
Alex LaBrecque, assistant professor of marketing at UNH, encourages his students to use AI tools like ChatGPT to supplement their thinking.
“It’s a skill that students should learn to develop because, whether we as educators like it or not, it’s going to happen regardless and students are going to be using it in their jobs constantly,” he said.
LaBrecque teaches marketing analytics and digital marketing at Paul College. Much of his classes’ work is case-based: summarizing business issues and using results from data to solve business problems. For his students, AI provides a starting framework for explaining results, generating initial ideas for how marketing programs might come about, or fleshing out ideas further, he said.
“[AI tools are] commonly available, just like when I was growing up and going to college, Google Search was available to me. So, I view it as two sides of the same coin; use the materials available to you; it’s kind of how we’ve just evolved through our time,” said LaBrecque.
But LaBrecque cautions students against becoming reliant on the technology.
“I view it as an enhanced brainstorming tool that can facilitate the writing process, but the majority of the work is being done by students,” he said.
Sam Carton, assistant professor of computer science at UNH, researches ways to make the collaboration between human and AI models “less dangerous and a little more ethical.”
“The problem is that if you do it in a naive way – if you just ask the model for help and just do what it tells you to do – you subject yourself to the fact that these models do make mistakes from time to time. When mistakes do get made, it’s a little bit hard to point a finger at the responsible party in a situation like that,” Carton said.
Instead, Carton suggests having the model break its responses into “bite-sized chunks” that a human can verify individually. That way, the human can make a more reliable judgment about whether to trust the model, and can take responsibility for that choice, he said.
Students should know how to use these tools effectively and, more importantly, when to use them, explained Carton.
“As a student, your job is not to produce good products, it’s to learn,” he said. “If you’re using ChatGPT to write your essays for you, you’re missing the point of writing the essay in the first place which is to learn how to write an essay.”
The university’s position is that, unless explicitly permitted by the instructor of a specific course, the use of AI tools (including ChatGPT and similar programs) is considered a violation of its academic integrity policy, said Dean of Students Michael Blackman.
“Many faculty are thinking about ways to integrate the use of automated writing tools in their courses,” Blackman wrote in an email to The New Hampshire, “and faculty are certainly welcome to do that.”
“It’s becoming part of our world and part of the way we do things,” Carton said. “To learn how to function effectively in the world as it is, you have to learn how to use the tools that dominate that world.”