Decades ago, artificial intelligence was something for sci-fi novels: an imagined future where robots with humanlike intelligence would clean our houses, cook our meals, and otherwise set us up for a life of leisure. A life where we could pursue our passions: write, make art, play music, whatever we liked.
Cut to 2025 and the future is here. Rather than AI bots that glide around dusting our homes à la Rosie in The Jetsons, our AI is largely acting behind the scenes.
When most people discuss AI today, they’re talking about generative AI, and specifically tools built on large language models (LLMs) such as ChatGPT, Google’s Gemini (formerly Bard), and Meta’s Llama. This type of AI, though its use in higher education remains controversial, is extremely prevalent among both students and staff, including at Baylor.
How AI is Being Used
“As far as I know, the students are using [AI] for all kinds of things. Now if the question is, ‘What are they allowed to use it for?’ That’s a different question.” – Dr. Pablo Rivas
How It’s Being Used by Students
According to Baylor professors, students in higher education are using AI for a little bit of everything: checking their homework, writing their homework, writing code, and generating images, videos, and music. Its use is especially prevalent in assignments that require a lot of writing, says Dr. Andrew Freeman, assistant professor of computer science. For assignments in subjects like math, though, he finds it close to useless. “They’re quite bad at math, actually,” he notes.
Some schools are leaning heavily into AI, encouraging their students to use it. Others are more conservative. Baylor seems to be somewhere in the middle, allowing each professor to set their own rules about AI. But make no mistake: students who use AI when it isn’t permitted for an assignment risk a charge of academic dishonesty.
How It’s Being Used by Professors
Many Baylor professors have jumped on the AI bandwagon for help with simple, repetitive tasks. According to Dr. Pablo Rivas, an assistant professor of computer science, that’s really AI’s sweet spot right now.
“Grading is one of those [tasks],” he says, as professors at some schools are using AI to automate the time-consuming task of grading.
Other professors use it for things like building calendars, summarizing notes, or, in some cases, creating course materials. That last use recently landed a professor at Northeastern University in hot water when a student noticed the professor had used ChatGPT to generate course materials while students themselves were not allowed to use the LLM.
Clearly the landscape around AI usage is nuanced and complicated, due in part to how quickly it is changing. Whatever their stance on AI, professors are certainly rethinking how they teach.
“I’ve had different policies [over time] myself,” says Dr. Kara Poe Alexander, managing director for Baylor’s Center for Writing Excellence. “I’ve gone from no AI use to, ‘You can use AI in this assignment,’ or, ‘You have to tell me how you’re using AI when you do use it.’ Because I teach writing, and I want them to grow as writers.”
For professors like Rivas and Freeman, who teach computer science rather than writing, the rules naturally look a little different. That said, even these professors at the forefront of AI research are quick to point out its drawbacks alongside its advantages.
The Good News
“If you’re using it as a tool, you can do more. And that’s great for society, I think.” – Dr. Andrew Freeman
It Can Help with Productivity
As Dr. Freeman says, when used as a tool, AI can help students and staff be more efficient and productive. He clarifies, though, that it should be seen as just one tool in the toolbelt. “If you’re coming to depend on it to do your work for you, that’s when it becomes a problem.”
It’s Great at Summarizing
A recent New York Times article discusses how AI models are getting more powerful while, at the same time, their “hallucinations” are getting worse. When an AI hallucinates, it essentially makes things up and presents them with complete confidence.
Hallucinations are most likely when someone asks AI a question without giving it any source material to work from. If, however, a student pastes in the notes they took themselves and asks the AI to summarize them or create quiz questions, the model can ground its answers in that text, leaving far less room to make anything up.
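As a rough illustration, here is what that kind of grounded request might look like using OpenAI’s Python SDK. This is a minimal sketch, not anything Baylor prescribes; the model name, file name, and prompt wording are illustrative assumptions.

```python
# A minimal sketch of "grounded" summarization with OpenAI's Python SDK.
# Assumptions: the openai package (>=1.0) is installed, the OPENAI_API_KEY
# environment variable is set, and notes.txt holds the student's own notes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("notes.txt", encoding="utf-8") as f:
    notes = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": "Answer using ONLY the notes provided. If something "
                       "is not covered in the notes, say so instead of guessing.",
        },
        {
            "role": "user",
            "content": f"Summarize these notes, then write five quiz questions "
                       f"based on them:\n\n{notes}",
        },
    ],
)

print(response.choices[0].message.content)
```

The system instruction does the real work here: by telling the model to rely only on the supplied notes, the student narrows the space in which it can hallucinate.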
The Bad News
“The more I used it, the more I started doubting myself as a writer and a thinker.” – Dr. Kara Poe Alexander
It Impacts Critical Thinking Abilities
A recent study from Microsoft examined the cognitive effects of AI on more than 300 knowledge workers and found that, across the board, the more confidence a worker placed in AI, the less critical thinking they displayed.
Another study, from June 2025, looked specifically at students in higher education and found that those using LLMs accumulated a “cognitive debt” compared with students who worked without them.
Dr. Rivas, whose research focuses on responsible AI and AI ethics standards, is well aware of what findings like these could mean for Baylor.
“This is what keeps me up at night–the kind of students we’re going to get soon,” he says. “If they have been using ChatGPT or other AI tools, what parts of their brains have not developed enough? What challenges are we going to have with them?”
It’s Easy to Become Reliant on AI
Dr. Alexander has several students who use AI to proofread or edit their writing. The more they use it, the more they seem to rely on it as an integral part of their writing process.
Alexander herself has used AI to proofread her own work but says it comes at a cost. “The more I used it, the more I started doubting myself as a writer and a thinker,” she says. She has ultimately decided to scale her use back to a minimum.
It Keeps Your Data
“In general, Baylor currently discourages the use of AI on systems that Baylor doesn’t host because of all the known privacy issues,” says Dr. Rivas. In other words, when students or staff enter personal information into an externally hosted LLM, the company behind it can collect and store that data.
In fact, here’s what ChatGPT has to say when asked, “Does ChatGPT store my data?”:
“OpenAI (the company behind ChatGPT) does collect and store user data, including your prompts, conversation history, and related metadata like device type, IP, timestamps, and (if provided) account details.”
It’s Not Actually Very Good Yet
Even though ChatGPT argues, “ChatGPT is very capable,” the experts at Baylor researching generative AI seem to have a different perspective.
In fact, Dr. Rivas often uses AI in his classes to show students how bad it is at some of the simplest tasks. One of his favorite exercises is simply to ask AI, “How many ‘R’s are in the word ‘strawberry’?” He says the AI will confidently, and without hesitation, tell the user there are two Rs in the word; the correct answer is three. Because LLMs process text as chunks called tokens rather than as individual letters, even trivial character counting can trip them up.
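For contrast, counting letters is trivial in ordinary code, which works on characters directly rather than tokens. A one-line sketch in Python:

```python
# Counting letters is deterministic in ordinary code, unlike an LLM,
# which reads text as tokens rather than individual characters.
word = "strawberry"
count = word.count("r")
print(f"There are {count} 'r's in '{word}'.")  # prints: There are 3 'r's in 'strawberry'.
```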
It Can Lead to a Lack of Connection
According to Dr. Alexander, the number of students visiting writing centers on campuses across the nation has dropped over the past couple of years. She believes this is directly tied to students relying on AI, rather than real people, to guide them in their writing, and it worries her.
“AI feedback is not at all the same as the feedback from a writing center consultant,” she laments. “With a human, you get that connection and emotional support, and with AI, you just don’t.”
How Baylor Hopes to Help
“Baylor has reached a good balance in demonstrating that we’re thinking about these things and actively investigating AI… and at the same time, allowing professors to maintain their academic freedom.” – Dr. Andrew Freeman
It seems there is a lot to be wary of when it comes to AI in higher education, but the truth is, the technology is new enough that we don’t yet understand its long-term implications. The University aims to stay at the forefront of the AI issue by pondering its ethics in higher education (and beyond), hiring more experts on the subject, and educating current faculty on its uses.
“As far as I know there is an AI committee composed of all the deans and very smart, important people who have a lot of decision-making power … that are investigating the safest, most responsible ways of using AI,” says Dr. Rivas. He adds that the faculty have formed several groups of their own, including the Academy for Teaching and Learning, the Baylor Ethics Initiative, and Dr. Rivas’ Baylor Responsible AI, all of which help create a future for students and staff where AI is used ethically and intelligently.
In fact, at the end of July, Dr. Rivas got some exciting news: Baylor, in partnership with Rutgers University, Ohio State, and Northeastern University, has received a grant to establish the only research center for responsible AI in the U.S. that is vetted by the National Science Foundation.
“This is going to position Baylor as a top place in the nation where people can come to do responsible AI,” he says. “And that’s exciting.”
What Comes Next?
“You can’t bury your head in the sand anymore. You really have to educate yourself, because otherwise you’re just going to get plowed over and left behind.” – Dr. Kara Poe Alexander
AI is moving fast. Like, really fast. Even Dr. Rivas, a researcher at the forefront of the field, admits he feels overwhelmed by how quickly things are changing.
Not so long ago, Rivas imagined a world where universities could assign each student a personalized chatbot, a sort of individual teaching assistant to help them with their work. Teachers could monitor it and get a sense of how their students were doing.
And Google just released such a system.
With that future clearly already here, Baylor plans to keep researching AI’s development and eventually create some sort of AI major, says Dr. Rivas.
Dr. Freeman points out that a required class teaching all freshmen the basics of AI could be a good idea, though perhaps only in the short term.
“Eventually, we’ll have students who have had these tools their entire life–and they’re going to be experts. We’re just at a weird crossroads right now, where most people don’t understand these tools,” he says, “and that’s kind of dangerous.”