Stony Brook University Libraries hosted a panel discussion called “Democracy in the Digital Age: Artificial Intelligence’s (AI) Influence on the 2024 Elections” in the Charles B. Wang Center theater on Monday, Sept. 30 from 12:30 p.m. to 1:50 p.m.
Stony Brook University Dean of Libraries Karim Boughida moderated the panel, which featured four panelists, each experienced with AI in a different field.
Boughida opened by introducing Executive Vice President and Provost Carl Lejuez, who highlighted the importance of hosting informative events on AI.
“As we are growing the AI3 Institute, this [event] is an opportunity for us to figure out where we are trying to go as a university and these events are really important for that,” Lejuez said. After his speech, the panelists introduced themselves and their credentials.
Assistant Professor in the School of Communication and Journalism Musa al-Gharbi discussed his studies in sociology.
He emphasized the importance of avoiding assumptions when engaging in conversations with people who hold opposing political views, and when electoral results do not match one’s own preferences, especially during an election.
“What I would encourage people to do … is to not look at ways of writing off people they disagree with as being bots or easily manipulated, but to take them seriously … and to assume that people are roughly as reasonable and as well informed as yourself [and] to take their views seriously when we disagree with them,” al-Gharbi said.
After al-Gharbi’s introduction, Assistant Professor of Psychology at American University Thomas Costello spoke. Costello studies social and political beliefs, as well as their origins and evolution. His research uses AI tools to produce persuasive arguments.
One study Costello conducted involved people writing about a conspiracy theory they believed, offering facts supporting it and then having a conversation with GPT-4, the model that powers ChatGPT, in which GPT-4 tried to debunk their theory.
“In having a dialogue [or] a disagreement, you’re moved into this level of processing where you’re forced to really think and be rational, and other kinds of political messaging and advertising doesn’t tap into that,” Costello said.
After a conversation of about eight minutes, there was a 20% decrease in participants’ belief in the conspiracy, Costello said. He added that in one in four cases, participants entirely disavowed the conspiracy theory they had originally believed.
This supported Costello’s theory that, compared with conventional political messaging, conversations move people to reconsider their beliefs when they are presented with evidence.
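Costello’s setup can be pictured as an ordinary multi-turn chat loop. The sketch below, written against the OpenAI Python SDK, is a loose reconstruction for illustration only, not the study’s actual code; the system prompt, the turn count and the model name are all assumptions.

```python
# Illustrative sketch of the dialogue format Costello describes: a
# participant states a conspiracy belief and supporting facts, then
# GPT-4 responds with counterevidence over several turns. This is a
# reconstruction for clarity, not the study's code; the system prompt,
# turn count and model name are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": (
        "The user believes a conspiracy theory. Respond with accurate, "
        "specific counterevidence and address their stated reasons directly."
    )},
]

print("Describe a conspiracy theory you believe and the facts that support it.")
for _ in range(4):  # roughly a few-turn, eight-minute-style exchange
    messages.append({"role": "user", "content": input("> ")})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    print(text)
```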
Following Costello, Senior Product Marketing Manager in AI at GitHub Paige Lord spoke. She is responsible for ensuring AI is used ethically in GitHub’s market. Lord began studying AI in 2016 through facial recognition software used in law enforcement throughout the United States. She has built a platform on ethical AI use, posting videos about the subject on TikTok, which is how Boughida found her and invited her to speak on the panel.
“I don’t think you should have to have an advanced degree in a topic like AI in order to understand how it’s having an impact on you and your community,” Lord said.
She shared her goals of keeping political conversations involving AI factual rather than sensational, helping the public understand the harmful ways in which AI can be misused amid the upcoming elections.
Lord said, “My focus and my energy goes toward debunking some of the claims that are being made … ensuring that I share as factual an account as I possibly can about what’s happening and help people to understand why there are nefarious uses of AI in the elections and what the moves might be.”
The fourth speaker was Klaus Mueller, a professor and Interim Chair of the Department of Technology and Society at Stony Brook, who shared his research on data visualization, which helps people understand data. As part of his studies, he evaluates causation and flaws in chains of reasoning.
Boughida asked Lord how she sees AI influencing this election and public opinion. Lord responded that AI is accessible to nearly everybody and gives anyone the tools to create content that puts their opinions out into the world.
Lord said that during the election, the assumption that everyone can differentiate between AI-generated content and reality is gaining traction. She said AI can lead people to switch their political views, especially those in the middle who can be swayed.
She gave examples such as a parody video reposted by Elon Musk on X. In the post, a Trump supporter used AI-generated audio to make it sound like Vice President Kamala Harris was insulting herself and President Joe Biden over a collage of footage from Harris’ rallies and events.
“We’re getting into a dangerous place with social media and AI,” Lord said. “We’re assuming that because we understand that it’s not true or real, that everybody has the same level of understanding [but the reality is that we are] in a world where we have a digital divide and people with different levels of education and different exposure to understanding technology like AI.”
Lord cautioned against these assumptions, especially for posts whose creators do not clarify that they are parody, because viewers who struggle to decipher what is real and what is not may be misled.
al-Gharbi added to Lord’s statement, emphasizing that people tend not to critique what is in front of them.
“While it’s true we shouldn’t take for granted that everyone else thinks and reasons the way we do … I think that we often misunderstand the ways people who are associated with the knowledge industry differ from anyone else … we highly educated, cognitively sophisticated people tend to be a lot more dogmatic … a lot more resistant to changing our minds when confronted with facts that challenge our priors, precisely because we are cognitively sophisticated,” al-Gharbi said.
Mueller said that amid this misinformation, Costello’s research offers hope: using ChatGPT to present facts could create teachable moments about people’s flaws in reasoning.
Mueller shared that in his research, he successfully used ChatGPT by showing it graphs of causal networks, which model complex relationships between variables, and asking it 64 questions, such as what it thought of the causal networks. In response, ChatGPT corrected mistakes and produced suggestions and alternative graphs.
Mueller suggested that a tool with an ability like ChatGPT’s to produce a corrected version of a graph could be used on social media platforms, such as X, as a fact-checker that assesses whether posts are factually correct and offers to fix mistakes.
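In practice, the kind of evaluation Mueller describes can be sketched as a short script that encodes a causal network as an edge list and asks a chat model to critique it. The example graph, the prompt wording and the model name below are illustrative assumptions, not details from the panel.

```python
# A minimal sketch of evaluating a causal network with a chat model:
# encode the graph as an edge list, send it as text and ask for a
# critique. The graph, prompt and model name are illustrative
# assumptions, not Mueller's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A deliberately flawed network: "ice cream sales -> drownings"
# confuses correlation (both driven by hot weather) with causation.
causal_edges = [
    ("hot weather", "ice cream sales"),
    ("ice cream sales", "drownings"),  # the flawed edge
]

edge_text = "\n".join(f"{cause} -> {effect}" for cause, effect in causal_edges)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You evaluate causal diagrams for reasoning errors."},
        {"role": "user", "content": (
            "Here is a causal network, one edge per line:\n"
            f"{edge_text}\n"
            "Are these causal claims sound? If not, explain the flaw "
            "and propose a corrected set of edges."
        )},
    ],
)
print(response.choices[0].message.content)
```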
al-Gharbi then steered the conversation toward the prominence of misinformation spreading during an election cycle. He mentioned an article he wrote for The Guardian on studies suggesting that most people who share fake news have low confidence that it is genuinely true. He said people are not always sharing misinformation to persuade others, but as satire and for its comedic value.
Lord explained how imperative it is to hold those who misuse AI accountable.
“We need to take a look at the biggest harms right now with AI … for example, sexually explicit content being created with AI, there are ways for us to extend policy right now so that people that are creating this could be prosecuted and held accountable,” Lord said.
In an interview with The Statesman, Boughida said hosting events such as the AI panel is part of the mission of libraries. He has been studying AI for a decade and combined his knowledge and resources to create conversations among students, helping them navigate information during the election.
“I wanted this [event] to be student centric, not faculty centric. The underlying message is to [tell] students to go vote and be aware of the impact of AI,” Boughida said after the event.