
“Sure! Here’s what you need to know.” Many college students have encountered this line at some point in their academic careers; it is one that professors are less than pleased to see in a Brightspace submission. Large language models (LLMs) have seen a surge in adoption in student work since OpenAI introduced GPT-3.5 in 2022. This rising trend in artificial intelligence (AI) usage has completely changed the relationship students have with their work, allowing them to generate practice questions, notes and homework answers at the click of a button.
Likewise, the introduction of AI has stirred discussion in the arts and the humanities at large. Language and image models can now create artwork that resembles something made by an actual human, compelling us to redefine our relationship with the work we engage with and with the hands and minds that make art. At the intersection of these conversations lies the role academic spaces play in cultivating creative minds against the backdrop of AI’s emerging presence.
“If I see something [in the arts] that is made by AI, I don’t ask the question that art prompts us to ask, which is ‘what was going on in this person’s head?’” Jean Elyse Graham, a professor in Stony Brook University’s English Department, said.
Graham teaches literature and computer science courses and participates in digital humanities research. In her class, PHI 201 (Demons to Think With), which she co-teaches with Professor Robert Crease, she uses philosophical concepts as guiding principles for understanding technological advancement. Graham urges students, as consumers of art, to examine the origins of the media we find value in and engage with, particularly as a mode of interpersonal expression.
“We have an urge to understand minds that are not our own, even though AI at present doesn’t think,” she said. “It’s not sentient. But it tricks us into thinking that it does.”
Art is a deeply personal form of expression that individuals create because of, not despite, the limitations of time and imperfection. The cultural adoption of AI artwork, whose status as legitimate “art” remains a matter of philosophical debate, shifts the playing field immensely. Engagement becomes less about understanding intentional communication and more about consuming an end product made for efficient delivery.
“If you put googly eyes on any inanimate object, people will project feelings onto it,” Graham said.

In essence, art is not just an end product whose value lies in merely existing, but the purposeful craft of cultivating one’s thoughts, values and judgment calls. Graham argues that this is what makes the work itself engaging.
Karen Levitov, the director and curator of Stony Brook’s Paul W. Zuccaire Gallery, said that empathizing is essential to critically engaging with art on an earnest level, and that it is not something we can achieve with data alone. Levitov posed the question: what does this mean for students, and for some artists themselves, who intend to incorporate these generative processes into their work?
Levitov hopes to embrace the natural progression of art, and of the processes artists use to make it, as both find their way further into the modern world.
“Art is always changing with new technology,” Levitov said. “Art is always affected by technology, and whether that technology is cameras or computers. It’s always changing and it’s always going to continue to change. So generative AI work is new, but it’s not unexpected.”
Levitov speaks to a central discussion about the relationship between generative AI and the arts: whether the former will outpace the creative output of humans. In a world trending toward automation, this poses an existential question: does the art we demand still need the human element to hold our attention? In Levitov’s view, there is potential for our work to be elevated, rather than diminished, by the new possibilities that emerging technologies open up.
“I’m really interested to learn from [artists using AI as multimedia], about how they’re using this technology in their work, and I think it’s really fascinating,” Levitov said.

To reconcile these different worldviews is to reach a better understanding of progress. AI is still in its infancy; fully appreciating its effect on the many corners of our culture will take both time and careful examination. Treating AI firmly as a tool, not a creator, makes possible work built with intent that still retains the creative benefits of having LLMs generate ideas or refine existing ones.
With taking advantage of these tools should also come acknowledgement of the invisible engineers who made them possible: the artists whose work feeds into the computer. Machine learning, at its core, determines basic rules by comparing inputs and outputs, and that requires training data. This data comes from the work of artists who often aren’t compensated, informed or acknowledged when it happens.
Jennifer Epstein, a novelist and visiting writer at Stony Brook, said she hopes individuals can hold themselves accountable for how they use the tools at their disposal. Her own work was found to have been used illegally in the training data for Meta’s Llama 3 model.
“I think there’s some real issues with the ethics of how it has come to be as good as it is,” Epstein said. “But more importantly, I just think that it really shuts off creative pathways and creative growth in the mind. You know, we learn to create by creating.”
There is dignity in learning to walk the walk, and the risk of AI is letting it take the reins over every aspect of the process. This applies not just to artists, but to any task handed off to an LLM. There is always the risk that what begins as offloading an inconvenience becomes a maladaptation. Central to creating art, and to any form of learning, is the human mind that facilitates both. Synthesizing information, recognizing gaps in our knowledge and communicating with peers are how we improve our mental capacities. There is a way for technology to serve that ability, but it is a narrow path forward, and we must be careful neither to dismiss its potential outright nor to fall into complacency.
The development of new LLM products like ChatGPT has proceeded amid an ambiguous discussion about the problems they aim to solve in our lives and the specific ways they are meant to intersect with existing needs and processes. With AI in the arts and humanities, this demands an interrogation of our values: what parts of our own creativity and analysis do we think are worth sacrificing?

These conversations are new, and their competing perspectives can only be reconciled if we, as students, artists and participants in society, continue engaging with them.
This past fall semester saw a strong push by New York State to seriously invest in the full capacities of AI technology through the opening of the new AI Innovation Institute: a multimillion-dollar initiative to bring SUNY to the forefront of AI research by awarding grants to research projects across a variety of disciplines.
In a press release, Gov. Kathy Hochul said, “We are not just preparing students for AI — we’re shaping how AI serves society, ensuring it strengthens communities and our economy.”
As Stony Brook and other SUNY campuses look towards a rapidly unfolding future, here’s to appreciating the versatility of our own run-of-the-mill human intelligence, too.