Students and faculty grapple with AI and academic integrity
Marley Stevens had no idea that using Grammarly could put her on academic probation.
A student at the University of North Georgia, where Grammarly is provided free to all students, Stevens had submitted a paper for a criminal justice class when she received an email notifying her of a zero grade.
“I thought he had sent the email to the wrong person because I worked super hard on my paper,” Stevens said in an interview with NewsNation in March. She is now expected to be on probation until February 2025.
I came across this story after I saw a similar case on a MacEwan Facebook group, where a student had been accused of using AI in their first-year biology class (but had only used Grammarly). While the punishment was less draconian, the student was still made to rewrite the assignment.
Then, I saw another similar case where a student was using an AI checker in a group assignment, and it flagged their partner’s work. Commenters advised the poster to be wary of the AI checker’s results.
I kept looking at other stories on student forums, faculty discussion boards, and university subreddits. I then saw a KPMG poll from last year, which found that over half of university-aged students were using AI, and more than 60 per cent considered that to be cheating. I realized a pattern was forming.
After almost two years with ChatGPT and other sophisticated generative AI, students and faculty are struggling to grapple with academic integrity and artificial intelligence.
The Cost of Cheating
Imagine you are in class, working away on an assignment that requires hours and hours of work to get right. You have to somehow churn out 5,000 words on a broad, wispy topic that you only kind of understand.
So, you put your head down and dig into the work. Ultimately, you come up with something that isn’t great, but works, and you feel proud about how you persevered and learned a lot about the concept and your own abilities.
After hitting “submit” on Meskanas, you meet with some classmates for a pint at Towers to celebrate being done. Everyone chats about the assignment, and you kick back and enjoy the vibes, but suddenly, you overhear:
“Yeah, I actually had ChatGPT write it for me.”
Suddenly, that work feels meaningless.
“It affects [students’] motivation to learn and put time into studying if they see that other people are not putting in the time and are just going to use whatever means it takes for them to get good grades,” says Paul Sopcak, a professor at MacEwan and coordinator with the academic integrity office, referring to the use of AI to gain an unfair advantage.
Darcy Hoogers, the vice president academic at SAMU, agrees from personal experience that “there is a degree of frustration” from seeing malicious use of AI in the classroom.
But there are other costs, too.
“Grammarly, without the generative AI piece, which is coming, could be considered academic misconduct depending on what the learning outcomes are,”
Paul Sopcak, MacEwan professor and coordinator with the Academic Integrity Office
For example, your degree is only worth as much as how it’s perceived. Academic institutions that pump out graduates without ensuring credentials are properly earned develop poor reputations. You may have heard the term “diploma mill.”
“We want to ensure that MacEwan degrees are worth something,” Sopcak says.
But when it comes to weeding out AI cheaters, MacEwan places a great deal of the onus on instructors to report and provide proof that cheating has occurred.
While Sopcak says he sympathizes with the extra work this has put on instructors, he disagrees with profs who don’t follow up on their suspicions of AI or fail to report them.
“If they have a suspicion, yes, they should follow the procedure that’s laid out. It’s actually not that they should. It’s in their contract,” Sopcak says, referring to the procedure for reporting academic misconduct.
When I asked Hoogers about this, he agreed with Sopcak.
“We pay a lot of tuition to ensure this fairness is achieved and maintained,” Hoogers says.
“I think it’s fair to ask the institution, professors included, to deliver a fair product to students.”
The Same Black Box
According to the Academic Integrity Office, misuse of artificial intelligence was the number one reported case of academic misconduct this last academic year, dethroning plagiarism for the first time.
It’s become a major threat to academic integrity, and while the institution doesn’t recommend it, some profs have been turning to AI for help.
AI checkers, many built on the same large language model technology behind ChatGPT, came online shortly after GPT-3.5 was introduced to the masses, followed by the improved GPT-4 model that brought AI chatbots to even greater levels of linguistic prowess.
It turns the issue into a ladder-versus-wall game. The walls that protect academic integrity are besieged by taller ladders – more technologically sophisticated cheating methods – which in turn force the walls to be built higher but more precariously with AI tools of their own.
“We want to ensure that MacEwan degrees are worth something.”
Paul Sopcak
On a Turnitin forum post from about a year ago (active up until just one month before writing), profs across North America decried how the AI plagiarism detector feature of the app was turning up false positives, especially when students claimed they were using Grammarly.
“I submitted student work, and Turnitin gave it a fairly low similarity score but 100% AI score. The student told me that he had Grammarly edit his work,” one comment said.
Part of the issue with generative AI is that it’s prone to hallucination: confidently stating things that aren’t true. For example, I learned from ChatGPT recently that my grandfather, a newspaper publisher in the ‘80s and ‘90s, died on January 13, 2024, in Toronto, Ontario. In reality, he’s alive and well, living in Edmonton.
Similar hallucinations happen with AI checkers, whether Turnitin’s opaque detector or others that openly advertise being built on the GPT-4 language model. Some AI-writing detectors have even been found to be biased against writers with English as a second language, disproportionately flagging their work as AI-generated.
When asked about AI checkers, Sopcak says it’s all “the same black box.”
“We don’t know how a decision is reached, and we know it’s highly inaccurate or can be.”
Already, Western University in Ontario and a number of American colleges and universities that provide institutional Turnitin accounts to their faculty have disabled the AI-checking function to dissuade professors from relying on it too much.
“MacEwan doesn’t sanction any use of these tools officially, but it also doesn’t forbid faculty members to use them,” Sopcak says. “It just warns them against some of the problems with these tools.”
In fact, MacEwan’s policy states explicitly that AI cannot be relied on to make a final decision.
When I ask Hoogers about issues with AI checkers, he points out there are more issues to consider than just fairness.
“There are certainly some concerns with the AI checker tools in terms of privacy issues and where the data is being stored.”
When a student’s work goes into an AI checker, it doesn’t just “disappear” once it’s done being checked, Hoogers says. The companies providing the tools could store all that data, including any pertinent information about the author.
In fact, privacy and ethical worries might be some of AI and academia’s greatest issues.
The Ethics Behind the AI Question
In an article for Dataversity, AI ethicist Katrina Ingram says, “There are ethical issues involving generative AI that we, as end-users, cannot address.”
When we talk about AI, it’s often lumped under a massive umbrella encompassing the next wave of advanced computing systems. However, different AI tools are made for different reasons and should be considered differently as well.
Regarding many of these market-made Generative AI tools, Ingram says we should be questioning if they align with our values and goals.
When I met Ingram for an interview, she talked about some major issues that come into play with generative AI tools like ChatGPT and Grammarly. One is privacy, and the other is copyright and ownership.
The big generative AI systems can only exist because mountains of data and copyrighted work have been used to train their large language models (LLMs). Who owns the words going into the models isn’t always clear, and it remains uncertain how ownership of those words works after they’ve been consumed by an LLM.
“There is a great deal of frustration.”
Darcy Hoogers, Vice President Academic, SAMU
The other issue, the one Hoogers raised, is privacy. You don’t know how these companies with market-focused motivations will use the data you input into their systems. In an article for The Hill, Ingram says that AI could force us to radically rethink privacy laws as we know them.
On top of this, there are considerable issues with bias from several possible places. MacEwan’s Student Guide to AI describes how bias can be fed into an AI through biased data, a biased designer, or simply through how the model chooses to interpret that data.
AI checkers are another example of this, as they were found to be biased towards flagging papers written by authors whose second language is English.
With all of these considerations, when we look at generative AI tools, whether ChatGPT or even the new generative AI functions coming to Grammarly, maybe we need to ask ourselves on a fundamental level: do they align with what we are trying to accomplish with academic integrity? Does academic integrity align with what we are trying to accomplish with a university education?
AI Problems, Human Solutions
If a student wants to use AI in any of Professor Iain MacPherson’s classes at MacEwan, they first need to approach him in advance to get permission and discuss how it’ll be used.
This is because MacPherson has adopted one of the several “boilerplate” policies laid out by MacEwan’s Centre for Teaching and Learning (CTL). Other profs allow students to use it with disclosure, and some may not permit it at all, but the CTL urges most instructors at MacEwan to give educational AI a chance.
“I’m fine with it. In fact, I know some professors who use it as a teaching aid,” MacPherson says.
Some professors, MacPherson says, will have students critique the writing that large language models will produce, which he calls “always sort of rubbish writing.” Others may get students to try to synthesize something from ChatGPT.
“We don’t know how a decision is reached, and we know it’s highly inaccurate or can be,”
Paul Sopcak
Regardless of which of MacEwan’s policies instructors take on, it’s up to them to let students know what’s up — via course outlines and clear discussions with their classes. It’s up to students to follow up and clarify if they are considering using any AI tools. This includes other AI-powered learning aids, like Grammarly and The Quote Bot, not just ChatGPT.
“That’s where it’s really important that faculty members are super clear about their expectations and the limitations that they set on the allowable use of Grammarly and other tools,” Sopcak says.
Simply put, your professor should be communicating with you, and you should communicate with them. Most people I spoke to for this story will tell you that ensuring these conversations are happening is one of the best solutions for dealing with AI.
Even when it comes to academic misconduct, it’s not guaranteed that first-time offenders misusing AI will receive any blemishes on their transcripts; it depends on the context. Paul Sopcak says that MacEwan has been moving towards a teaching- and learning-centred approach for some time.
“The incident report to the academic integrity office really is just a record of a conversation that took place.”
But while the solutions to AI and academic integrity may be more human than we think, Sopcak admits he’s worried about sustaining the relationships between professors and students as the campus grows.
“Long-term, universities, not just MacEwan, need to think about quality, not just quantity,” Sopcak says.
To learn more about AI literacy, check out MacEwan’s student guide.
Graphic by Forrester Toews