At the Intersection of Science and AI
At a Data Driven Discovery Initiative event, faculty from psychology, chemistry, philosophy, criminology, economics, and earth and environmental science shared their thoughts on how AI is fueling scientific advancements and raising new questions.

Sudeep Bhatia, Associate Professor of Psychology, spoke about the possibilities for AI use in his field, like the opportunity to understand complex human behavior. He was one of six Penn Arts & Sciences faculty who offered insights at the DDDI event.
Artificial intelligence is reshaping how we think about pretty much everything. At an April 24 Data Driven Discovery Initiative (DDDI) discussion, AI took center stage as faculty from a diverse range of departments explored how advancements in AI and technology are shifting fields from chemistry and criminology to psychology and philosophy.
“These technologies are affecting every aspect of our mission, from how we teach and learn to how we can research and generate new knowledge,” said Mark Trodden, incoming dean of Penn Arts & Sciences and Fay R. and Eugene L. Langberg Professor of Physics, in opening remarks. “They are opening new possibilities, raising new questions, and challenging us to think critically about how we understand power.” AI, he went on to note, is a “transformative force with the potential to dramatically accelerate discovery.”
Six faculty experts then provided their perspectives on the topic.
Maria Cuellar: AI in Criminology
One early AI adopter has been the field of criminology, said Maria Cuellar, Assistant Professor of Criminology. Facial recognition technology, or FRT, is in active use by the FBI and the Department of Defense, as well as by Philadelphia police, she said. “It has the potential to be a lot more accurate than something like fingerprint comparisons.”
But Cuellar also emphasized that the technology is still developing and there have been plenty of instances when it has fallen short. One issue with FRT, Cuellar noted, is that it doesn’t deal well with “realistic” images, the kind typically captured off a security or street camera rather than under the controlled, high-quality conditions in which the systems perform best.

Maria Cuellar, Assistant Professor of Criminology, spoke about the use of facial recognition technology, which has the potential to be more accurate than fingerprinting but also has “unresolved issues” around ethics, reliability, and fairness.
“FRT has high accuracy under high-quality image settings,” she said. “The question is, what’s happening with the realistic images that the police have?”
In her research, Cuellar has observed that FRT has a higher false positive rate for Black and Asian individuals, as well as for women, than it does for white individuals and for men. She said she hopes that pointing out these sorts of problems will encourage people to understand the flaws and opportunities for error within systems like FRT. That, in turn, could help inform how aggressively the technology is used and highlight areas that need strengthening as it develops.
Cuellar is also aware of the ethical concerns posed by FRT and similar technologies. “There is an urgent need for research in this area to address unresolved issues around the reliability, fairness, and potential benefits and harms of technological applications in criminal justice.”
Irina Marinov: Shifting How Climate Research Happens
Irina Marinov, Associate Professor of Earth and Environmental Science, admitted she’s still mastering many of the new AI tools now at her disposal as machine learning evolves.
“The best way to learn is to make it part of a class,” she said, pointing to her Climate and Big Data course, which teaches students—and Marinov herself—how to navigate large datasets while applying a mixture of climate and computer science skills.
Marinov studies interactions between the ocean and atmosphere in the midst of climate change. “I run big climate models, and then I analyze the output of those climate models,” she explained. “I also analyze satellite datasets, observational datasets, to look at both the role of the oceans in climate and the response of the oceans to climate.”
For a project of Marinov’s focused on global climate security, she’d been using machine learning to quickly assess datasets that include media coverage of environmental events and protests and compare them with observations of natural disasters, drought statistics, and other climate datasets. Though it was recently defunded as part of the government’s sweeping cuts, Marinov pointed to the initial results as a key example of how AI is shifting the way climate and earth sciences researchers do their work.
Ultimately, she added, it has the potential to help the field understand how climate change is affecting societies and global migration.
Jesús Fernández-Villaverde: Upending Economics
Machine learning has “completely changed” economics, said Jesús Fernández-Villaverde, Howard Marks Presidential Professor of Economics.
To illustrate, he described an extensive project he’s working on that looks into the relationship between economic sanctions on oil and a rise in the use of “dark ships.” Those oil tankers, which operate illegally, have kept the energy source flowing into countries even after they face Western sanctions.

Jesús Fernández-Villaverde, Howard Marks Presidential Professor of Economics, discussed the ways that machine learning has “completely changed” economics.
Understanding how that occurs is challenging—dark ships can operate because they employ tactics like Automatic Identification System manipulation, making them invisible to global tracking systems. But Fernández-Villaverde’s team used AI to build a model that could measure a ship’s proximity to ports, as well as the likelihood of dark ship-to-ship transfers.
Ultimately, they found that dark ships made up around one-fourth of the global crude-oil tanker fleet between 2017 and 2023.
“It’s the kind of finding that would previously have been tricky to determine but has become much more feasible as technology has evolved,” he said. And it’s just one example, he added. “I now have data that we wished we had and didn’t have before.”
Sudeep Bhatia: Understanding Complex Human Behavior with AI
For psychology researchers, AI offers much more than an opportunity to improve tools and research measurements, according to Sudeep Bhatia, Associate Professor of Psychology. He pointed back decades to the development of the computer, which gave researchers powerful tools for studying human behavior and helped birth modern psychology. As AI grows and expands, a similar shift is playing out, he said.
For example, AI is not only changing how researchers study humans, but psychology experts are also interested in AI-human interaction and in using AI to better understand the human mind. AI, Bhatia argued, “gives us an understanding of human experience and human behavior in the wild in a way we could never hope to achieve before the advent of these large language models.”
That might not be entirely positive for humans as a species, he added; after all, some research has shown downsides to how quickly people project humanity onto AI chatbots, with their long-term mental health possibly suffering in the process.
But when it comes to research, new advancements mean new opportunities, Bhatia concluded. “Before the recent generation of AI, all we could do were simple analyses. Growth in technology opens a completely new way of understanding complex human behavior.”
Andrew Zahrt: A ‘Profound Effect’ on Chemistry
“This isn’t primarily a research talk,” said Andrew Zahrt, Assistant Professor of Chemistry. “I’m mostly trying to convey the way artificial intelligence has developed within the field of organic chemistry and health tech. We’ve been trying to figure out how to use computers to accelerate discovery and chemistry pretty much as long as they’ve been around.”
Here, Zahrt cited the 2024 Nobel Prize in Chemistry, which centered on protein discoveries. Part of that included groundbreaking work from a team at Google DeepMind, which developed an AI model solving what the Nobel committee called “a 50-year-old problem”: predicting proteins’ complex structures. Their model, AlphaFold2, has helped predict the structure of nearly all 200 million proteins ever identified by researchers.
That technology has now been used by more than two million people in 190 countries for a wide range of scientific applications, helping researchers understand issues like antibiotic resistance and how to decompose plastic more quickly. “This has had a pretty profound effect on the field,” Zahrt said.
AI doesn’t always work perfectly, Zahrt added, noting instances when it can produce interesting predictions about chemical reactions that ultimately are “not synthetically useful.” But, he concluded, machine learning has a lot to contribute to chemistry as a field.
Carlos Santana: Philosophers as Translators
Carlos Santana, Associate Professor of Philosophy, understood that people might be confused about his involvement in a conversation on AI and science, but he had a swift explanation. “In philosophy of science, we often end up being effective translators between people from different research communities.”
A case in point is a long-term project Santana has been assisting with, which uses AI tools to help address an underlying systemic problem: how conservation workers around the world make decisions. Though there’s plenty of scientific research available, he observed, there’s a tendency within the field to rely on conventional wisdom rather than seek out vetted literature.
An international collaboration he’s part of could help fix that, bringing together ecologists, computational linguists, and computer scientists to create a tool for people in this field that draws on academic research but offers context specific to their areas of interest. Implementation is still a ways off, Santana said, “but collaborators hope it will one day enable conservation workers globally to ask their questions and quickly yield an answer.”
Santana underscored that he isn’t bringing computer expertise to the project; rather, his role is as “the skeptic and the naysayer.” “There are some risks here,” he said. “But I also think there’s a lot of potential for AI tools to improve the science-policy interface in areas like this that could be stronger.”