The Student News Site of Loyola University Maryland

The Greyhound


AI Seen as Tool and Threat at Loyola

Abby Benner

ChatGPT is just one of many artificial intelligence platforms that are capable of generating anything from recipes and memos to outlines and college essays. Launched in 2022 by OpenAI, it has gained notoriety in educational settings as students misuse its abilities to produce inauthentic work. Some Loyola professors see the use of ChatGPT as grounds for plagiarism, whereas others consider it a useful tool for the future.

Loyola has already had to grapple with AI and its effects on academic integrity. The Honor Council Annual Report for the 2022-2023 school year reported that of the 56 hearings held by the Honor Council, 15 involved the use of AI, most commonly ChatGPT.

Mark Lee, the administrative moderator of the Honor Council, explained that although the Loyola Honor Code makes no mention of AI, and there are no plans to add provisions for its usage, AI-related incidents of academic dishonesty are treated the same as any other offense.

“AI doesn’t change the definition of cheating or plagiarism. Someone takes something from AI and there’s nothing cited, and that’s not fair to the reader. Did it come from AI, or did it come from your own brain? That would be plagiarism. We see instances where someone used AI and they shouldn’t have used AI. They weren’t allowed to use AI, that would be cheating,” he said.

Lee also stated that it would be difficult for the Honor Council to create a universal set of standards for the use of AI at Loyola. 

“Since there’s so many different policies, we can’t capture every single professor’s rule in the Honor Code,” said Lee. 

[Graphic: Loyola AI Policy. Credit: Sophia Strocko]

As AI has grown in accuracy and popularity in recent years, Loyola University Maryland has risen to meet it with policies that regulate the use of AI in academics.

The individualized AI policies of each Loyola professor follow guidance from Loyola Provost Dr. Cheryl Thomas-Moore and the Office of Academic Affairs. A document entitled “AI Guidance Loyola” explained that each department and faculty member should determine their own individual policy on AI and clearly state it in each course syllabus. 

Loyola also featured an article titled “A Jesuit Approach to Artificial Intelligence” in the Fall 2023 issue of the university’s official magazine to share the institution’s stance with current and prospective students, families, and alumni. 


Dr. Sara Collins, a Speech-Language-Hearing professor at Loyola, is one of many professors on campus who sees AI as a useful tool. She has constructed her curriculum for students to have the option to work with ChatGPT and learn the ins and outs of the new tool. 

“I give them the choice. You can use it and show me you know how to use it. Use this valuable time to get practice and feedback, but you don’t have to use it,” she said. 

Collins aims to guide her students through the complex world of AI while they continue learning in a traditional classroom setting. For example, she encourages her students to use ChatGPT to create outlines that help them organize responses within discussion forums.

“As critical learners, as a student in this field that is growing so wide, it is my role and obligation to make sure that my student feels responsible using it as responsible consumers of AI and are just knowledgeable about the things that it can help them with,” she said.

Dr. Joshua Smith, a professor of Educational Psychology at Loyola, is also a proponent of the use of AI in the classroom. In his syllabi, he explains to students that the use of AI is not just allowed, but encouraged in his classes. 

“I tell them to use it when they need to ask a question, when they can’t find something in the textbook or in the other readings, or if they’re not sure if what they think about a particular topic is quite right. Start with asking good questions. When they use that source in part of their paper, as part of their references they write a little sentence saying, ‘on page two, in this section, I used ChatGPT,’” Smith said.

Smith views AI as not just a tool students can use to supplement their education, but also one that can enhance it. 

“There are certain things that they don’t know, and they’re not going to be able to figure out on their own, and that will really accelerate their learning of a particular issue if they go use [AI]. I would argue that AI is probably better for some of the faculty’s knowledge on these topics because it’s more contemporary and it’s gathering lots of different scholars and ideas, not just one person’s perspective,” he said.

When looking towards the future of AI and academics, Smith sees the blending of the two as an inevitability. 

“AI is going to be part of every living, breathing action we do in the future, and so how could we exclude it from a very important part of child or adolescent development, of adult education? I bet you in five years from now every single student and every single faculty member [will be] using it for something,” he said.


Luci Fiorini ‘26 has found that most of her professors are taking a more restrictive stance on the technology. 

“Most of my classes are not very supportive of AI. In general, my professors’ policies have been pretty straightforward, because most people say don’t use it, period,” said Fiorini.

Bret Davis, a professor of philosophy at Loyola, finds that the use of AI in classes holds little benefit for students.

“If we allow students to use ChatGPT to write their papers, we will be amputating the final and crucial step in the learning process. After reading and discussing texts, the difficult process of brainstorming, outlining, drafting, revising, and proofreading that they go through to write papers is indispensable to thoroughly understand, critically think about, and creatively and compellingly respond to the material,” Davis said.

Davis prohibits the use of ChatGPT in his curriculum, although he is aware of how the technology is transforming the educational landscape. He worries AI will stunt students’ education. 

“In the case of ChatGPT, I am interested to learn what the advantages will be, other than convenience. As an educator, my attention is, at the moment, drawn to the huge negative impacts it can—and likely will—have on a core component of education: the cultivation of students’ ability to research and write,” he said. 

Another professor who has chosen to ban AI in classwork is Dr. Marie Heath, a professor of Educational Technology at Loyola. Although Heath understands the benefits that students and other professors could see in using AI as a tool to help generate ideas and verify thinking, she believes that the negatives far outweigh the positives. 

Heath’s objection to the technology centers on the ethical and privacy issues raised by AI. She said that AI has already done real harm, and she does not want to subject her students to the same threats. One instance Heath cited was the use of AI algorithms to identify criminal suspects, which has led to the false arrests of two Black men in Detroit due to biases in the algorithms.

“I guess my reservation is less about students cheating in class and more about, ‘Hey, I don’t think this technology is, as it is created right now, good for society.’ I think it’s harmful to society, so I don’t think that using it is worth the cost to our society as it is right now,” Heath said. 

Heath also acknowledged that there is potential for the companies that run AI software to benefit financially from collecting the data of students using the technology. 

“I don’t want a company to profit off of my students because I asked them to use it, or even told them it was okay to use it, when I’m not convinced that it’s ok for their data to be hoovered out and sold back to them and their ideas to be taken by a company and then used for profit, and the student doesn’t profit from it,” she said.


According to Loyola’s administration, “Our identity as a liberal arts institution steeped in the Catholic, Jesuit educational tradition well positions us to recognize the educational opportunity and possible challenges of classroom use of generative AI.”

Despite their differing policies and views on AI in academics, Loyola professors and faculty have been united in their message to use the new technology with caution.

“My advice is to be really careful when using AI to make sure what the rules of the assignment are,” said Lee, warning students against accidentally violating academic integrity policies while using AI technology. 

Heath echoed Lee’s sentiment of caution while explaining how she thinks members of the Loyola community should stop to ponder the positives and negatives of AI before using it.

“I would like to see Loyola come up with questions for faculty to consider and for students to consider to just kind of pause and think, is this the right thing? Is this the right technology to use, and do the benefits outweigh the harms?” said Heath. 

