Just a few short months ago, the most advanced AI chatbot available to the public was Cleverbot, an AI that would parrot back responses closely related to what users typed without adding much to the conversation. Today, a simple one-sentence prompt can turn out anything from an Olive Garden ad script to an essay nearly fit for turn-in.
Launching on Nov. 30, 2022, OpenAI’s ChatGPT rose from zero daily active users to over a million by Dec. 4, 2022. This prompted other tech giants to follow: Microsoft and Google released Bing AI and Google Bard on Feb. 7 and Feb. 6, 2023, respectively.
How we got here
First released to the public as ChatGPT 3.5, this version was built on 175 billion parameters, the internal values a model adjusts during training, to turn out answers to the public’s queries. It could take text-only inputs of roughly 3,000 words and respond with text of limited length. It could also receive passable grades on law, business, medical and other college-level exams.
In just three and a half months, OpenAI released GPT-4. This version can accept inputs of up to 25,000 words and reportedly uses more than a trillion parameters, a figure OpenAI has not confirmed, to generate its answers. Most importantly, it can now accept image inputs alongside text. These advancements took the AI language model from merely passing the aforementioned exams to scoring near the top. Remember, this happened in just under four months, and while GPT-4 access requires a paid subscription, the base ChatGPT remains free to the personal end-user.
Dr. Logan Jones, head of Ferris’ AI program and dean of the College of Business, has been working with AI and machine learning and their role in academics for some time. He says this is a leap beyond what we’ve had before.
“If you look at quantitative sources like for math, we’ve been seeing those in academics for a while now,” Jones said. “Students can take a picture of an algebra problem, and it will show them the answer and the work… But this is the first one that I know of that’s at a quality where you could have a machine write you an essay for a class or a cover letter for a job.”
It’s already infiltrated
These tools are in free public beta, accessible to anyone with an OpenAI, Google or Microsoft account. It’s safe to assume that if you haven’t already played with an AI language model yourself, you probably know someone who has.
English professor Dr. Nate Garrelts has studied how AI may impact literature education but wasn’t quite expecting the wave of these AI language models.
“It did become an overnight problem,” Garrelts said. “I did a sabbatical research project in 2018 using AI to study literature. At the time, it wasn’t very good at writing and was really limited to sentiment analysis. But I was shocked at the amount of progress that happened in those intervening five years, how accessible it became to the public immediately and how quickly word spread.”
Garrelts believes he’s already seen instances of students using AI to assist with their coursework. That prompted him to learn more about the technology and draft a statement on AI for his students, explaining its limitations in an attempt to dissuade them from using it.
“There are a couple of tells,” Garrelts said. “An AI is only knowledgeable about the things it’s been programmed to analyze. So, if a student asks it about something it doesn’t know about, some AI will admit they don’t know… but others will just invent something based on what you asked.”
Garrelts described how, after receiving responses about a lesser-known short story that didn’t quite make sense, he turned to AI himself and asked the questions he had assigned to students. Sure enough, the AI turned out 20 possible answers to the question.
He also recalled asking students to compare two poets, one of whom was much harder to interpret than the other. Where most students would acknowledge that difficulty, and might even admit they weren’t sure they were correct, he received a few perfect, polished, textbook-style answers that didn’t match the students they came from.
Some students didn’t even seem to read what these AI language models had generated before turning their responses in, according to College of Arts, Sciences and Education Dean Dr. Randy Cagle.
“One of the first instances of this [that] was brought to my attention was from a professor who had a student submit something… where they clearly just cut and pasted a response without looking at it,” Cagle said. “It included the sentence ‘As a language model, I do not have personal beliefs…’ just right there as a dead giveaway.”
The Torch also spoke with students who admitted to using tools like ChatGPT to assist them in coursework. One claimed to have used ChatGPT to speed up making a presentation because they felt the assignment was redundant.
“It wasn’t information I was unfamiliar with,” the student said. “With my workload, I felt it was just easier to have an AI write it for me versus me spending the time trying to word everything correctly.”
The student said that, after speaking with their peers, they found they had spent roughly a third as much time on the presentation as those who did it on their own. They went on to share how little they had to assist the AI to get a presentation they were comfortable with.
“There were a few parts where I wish it had gone a bit more in-depth,” the student said, “so I went back and added a few things… but other than that, it hit everything right on the nose.”
They admitted that while their presentation may have lacked a bit of depth compared to their peers’, they had no fear that their instructor would be able to detect they had used ChatGPT. They said that while they wouldn’t use it on every written assignment, they could see it taking the monotony out of some work.
Where do we go from here?
Nearly everyone who responded for this story shared a similar sentiment: pedagogy will shift because of this.
In the short term, some professors are changing the sources they give their students. As outlined in his statement on AI, Garrelts has opted to assign lesser-known stories so that AI will have less of a chance of assisting in student assessments.
In the long term, we just don’t know yet. Deans and faculty alike aren’t sure where this will land, especially as the online modality becomes so prevalent in higher education.
“It’s hard to imagine a case where higher ed is confronted with something that’s requiring potentially such a fundamental shift,” Cagle said. “Online is where many people see the future of education. With traditional college-age enrollment declining, universities want to make that up by using online education to reach adult learners remotely… We’re in a wait-and-see because we just don’t know its extent yet, but we do know we’re also looking at something consequential as well.”
Cagle was skeptical of the AI detectors companies were marketing to universities. These tools claim to work like current plagiarism detectors, returning a percentage figure that suggests how much of a text AI wrote. But even plagiarism detectors aren’t 100% foolproof, he continued, and perhaps universities need to think bigger and step outside the box of detectors altogether.
“The new stuff coming out is going to be trailing behind this for quite a while,” Cagle said. “Where we’re at, I’m not sure if that’s the route we should take just trying to constantly fight this. Maybe we look at another way to assess students to ensure that they’re getting the things they need out of these classes. While we don’t have great tools at this point, we’re not helpless either.”
In the meantime, the message was nearly universal: if you feel stuck or need help, ask questions and talk to faculty. The university employs these experts in their fields for a reason. The real downside of leaning on AI, they suggested, is the education a student misses by not doing the work themselves.
“You get out of your education only and exactly what you put into it,” Cagle said. “These kinds of shortcuts may benefit you in the short term, but they’re dishonest. In the long run, they undermine what you’re out to achieve.”