When Should We Expect the Robot Army?

Arts and Sciences faculty on the fact, fiction, and future of artificial intelligence.

Monday, November 21, 2016

By Susan Ahlborn 
Illustrations by Ashley Mackenzie

“In popular culture, there’s a pretty simple equation: If you take a machine, give it a brain, and give it a gun, the first thing it wants to do is destroy humanity,” says Michael Horowitz, an associate professor of political science. “I think it says more about how we think of ourselves than it does about the machines.”

And though The Terminator or similar science fiction is the first thing that comes to mind for some, artificial intelligence in other forms is already a part of our world. What about a program that can help doctors better understand variations in autism? Or an algorithm that finds patterns in literature we never would have seen? What if the “killer robot” is a weapon system that can protect military bases and ships from missile attacks?

Artificial intelligence (AI) is defined in many different ways. These definitions stretch from machine learning, in which computer programs teach themselves to improve as they are exposed to new data, to an as-yet nonexistent true artificial intelligence that can interact with reality and set its own goals.

Concern about new technology didn’t start with AI. “When I hear people say this is the first time in history we are really scared of technology, I ask them to read poetry from the Thirty Years’ War, which went on from 1618 to 1648,” says Heidi Voskuhl, an associate professor of history and sociology of science. “It’s filled with a despondency that has to do with the wartime technologies of destruction.” 

As we navigate the fourth industrial revolution, people are again worried about losing their jobs to machines, about being killed by a machine, about dehumanization. At the same time, no one ever compared the internal combustion engine to a child, as reporter Charlie Rose did with IBM’s Watson during a story on 60 Minutes.

From weapons to language to memory to literature, Penn Arts and Sciences faculty are not only developing new forms and applications for AI, they’re thinking through its implications—for better and worse.

THE SOULLESS SOLDIER

Michael Horowitz, associate professor of political science

Associate Professor of Political Science Michael Horowitz first became interested in next-generation defense tools during a fellowship year at the Pentagon. The Associate Director of Penn’s Perry World House, Horowitz has written and spoken extensively on military applications of AI. In 2015 he addressed a United Nations assembly in Geneva, Switzerland, dedicated to emerging issues related to autonomous weapons technologies.

When used to describe a weapon, “autonomous” means one that operates independently. The U.S. and 25 other nations now field close-in weapon systems that defend against incoming missiles. These can be run by a human but have an automatic mode that will detect and engage threats arriving too fast for a person to handle. A person still has to turn the system on, however.
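
To make the arrangement concrete, here is a toy sketch, in Python, of the supervisory logic described above: the system engages on its own only when it has been switched on and a threat would arrive faster than a person could react. Every name and threshold in it (Threat, engage_decision, the five-second reaction window) is invented for illustration and does not describe any real system.

```python
from dataclasses import dataclass

# Hypothetical sketch of a "human on the loop" engagement rule.
# All names and thresholds are invented for illustration.

HUMAN_REACTION_SECONDS = 5.0  # assumed time a crew needs to assess and respond


@dataclass
class Threat:
    track_id: str
    seconds_to_impact: float


def engage_decision(threat: Threat, auto_mode_enabled: bool) -> str:
    """Decide whether the machine engages or defers to a person."""
    if not auto_mode_enabled:
        return "refer to human operator"  # a person still has to turn it on
    if threat.seconds_to_impact < HUMAN_REACTION_SECONDS:
        return "engage automatically"     # too fast for a human to handle
    return "refer to human operator"      # time remains for human judgment


print(engage_decision(Threat("T1", seconds_to_impact=2.0), auto_mode_enabled=True))
print(engage_decision(Threat("T2", seconds_to_impact=30.0), auto_mode_enabled=True))
print(engage_decision(Threat("T3", seconds_to_impact=2.0), auto_mode_enabled=False))
```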

Could there ever be a weapons system operated by algorithms that could activate itself? Not anytime soon, according to Horowitz. “But it’s already raising questions, one of the largest being, ‘Is there something inherent about being a person that means a machine should not have the power to decide whether you live or die?’”

He also points out that if the goal in war is to win in a way that minimizes casualties and suffering, AI might be a way to do that. “If you imagine machines that don’t get tired, don’t get angry, and can responsibly execute commands, it’s also possible to imagine applications of AI that can reduce human suffering in a war, and that’s not a bad thing either. It’s a real dilemma.”

Horowitz predicts that over the next 10 years, we’ll start to see the introduction of more machine autonomy into militaries, but in basic ways like the automation of logistical processes and more use of autopilot. “A big innovation push for the U.S. military right now surrounds what they call human-machine teaming, which is explicitly about trying to leverage emerging technologies to help people make better decisions,” he says. “This concept of teaming also has applications well beyond the military context, which makes sense since much of the innovation in AI is happening in the commercial and academic sectors.”

THE PURPOSE-DRIVEN ROBOT

Lisa Miracchi

Penn Engineering’s General Robotics, Automation, Sensing, and Perception (GRASP) Lab might seem an unlikely place to find an assistant professor of philosophy. But theoretical roboticist Lisa Miracchi is studying intelligence itself: what it is to be an intelligent being, and how the non-intelligent, non-agential elements that make us up can add up to one.

“Unlike the brain, we understand how robots work really well, and that gives us a new perspective on the problem,” she says. The collaboration has motivated her to get more concrete and specific about some of her ideas—and to brush up on her calculus.

Miracchi, who is authoring a chapter in the forthcoming book The New Evil Demon: New Essays on Knowledge, Justification, and Rationality, thinks that defining AI in computational terms is too simple. She’s interested in what she calls agency—what it is for a system to have its own goals. “We can get computers to beat world-class chess players, but we don’t think they’re really intelligent, and why not?” she asks. “The best thing I can come up with is that it doesn’t matter to the computer. It’s a tool accomplishing the goals of its programmers. My dog couldn’t play chess to save his life, but he has his own goals.”

Computational systems operate completely internally, with only the data they are given, and the contents of that data are irrelevant for understanding how they work. Miracchi thinks that humans and other agents have precisely the opposite property. “The contents of my mental life are really important for explaining why I do what I do. I think somehow the relationship between the brain, the body, and the environment puts us in certain special kinds of relationships to things in the world.” 

In principle, Miracchi sees no reason that an artificial intelligence could not be created, and nothing to indicate that only systems that have neurons can be intelligent. “Maybe we’ll discover that organic systems have special properties that are really important for having consciousness and agency. I don’t think we have any reason to think that’s true yet,” she says, then adds, “It would be really interesting if it wasn’t. I don’t have an agenda as far as that’s concerned.”


THE MEMORY CHIP

Michael Kahana

With a grant from the Defense Advanced Research Projects Agency (DARPA), Professor of Psychology Michael Kahana is working on a device that could restore memory function in patients with neurological disease or brain injury.

Kahana studies the basic functions of how we store and retrieve memories. Through a unique collaboration with neurosurgeons and neurologists at eight academic medical centers, including the Perelman School of Medicine, Kahana’s team studies memory in patients who have electrodes implanted in their brains as part of the neurosurgical treatment of intractable epilepsy. By monitoring electrical signals recorded from these electrodes as patients play memory games on a bedside computer, the team has been able to identify patterns of brain activity that are indicative of good versus poor memory function.

“Using these signals we can predict when a studied item will be remembered and when it will be forgotten,” Kahana says. “We can also predict when someone is about to correctly remember a previously studied item and when they are about to make a memory error, recalling an item that was not actually studied.”
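
As a rough illustration of what such a prediction involves, here is a minimal sketch that trains a cross-validated classifier to predict later recall from per-item features of brain activity. The data are simulated and the feature construction is a stand-in; the lab’s actual pipeline, built on spectral features of intracranial recordings, is far more involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated stand-in for the decoding problem: predict whether a studied
# item will later be recalled from features of brain activity recorded
# while it was studied. Real work uses spectral power from implanted
# electrodes; everything below is illustrative only.

rng = np.random.default_rng(0)

n_items, n_features = 600, 40  # items studied; per-electrode activity features
X = rng.normal(size=(n_items, n_features))

# Pretend a few features carry a weak "good memory" signature.
signal = X[:, :5].sum(axis=1)
recalled = (signal + rng.normal(scale=2.0, size=n_items)) > 0  # True = later recalled

# A regularized classifier, scored with cross-validation (AUC).
clf = LogisticRegression(C=0.1, max_iter=1000)
aucs = cross_val_score(clf, X, recalled, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {aucs.mean():.2f}")  # above 0.5 = better than chance
```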

Now his team is exploring whether these biomarkers of good memory function can be used to help restore memory in impaired individuals. They are doing this using targeted electrical stimulation in small areas within deep brain regions that are important for memory function. “By examining how electrical stimulation modulates the neural signatures of good memory, we can determine when, where, and how to stimulate the brain to improve function,” says Kahana.

Kahana’s team is collaborating with Medtronic Corporation and Lawrence Livermore National Laboratory to build next-generation brain stimulator technology and high-density electrodes that will be able to greatly enhance the ability to responsively stimulate the brain to improve function.

Kahana emphasizes that this kind of memory chip is not intended to “give people super memories.” Rather, he hopes to “promote people’s natural abilities so they are at the top of their game most of the time.”

Asked to speculate about whether some kind of technology will ever be able to augment the brain to perform beyond its natural abilities, he compares it with eyeglasses that could make your vision more than perfect. “But given that the brain is such an amazingly tuned device to solve its problems, I think it’s going to be a lot harder to improve on its capabilities than to restore them.” 

THE ELECTRONIC EAR

Mark Liberman

One of the challenges facing people diagnosed with autism is difficulty with communication. It has also become clearer that “autism” is not a single condition that fits easily into a mold. “People use the word ‘spectrum,’ but even that is misleading,” says Mark Liberman, Christopher H. Browne Distinguished Professor and chair of the Department of Linguistics. “It’s really a multidimensional space. And furthermore, it’s a corner, or probably several corners, of a space we all live in.”

Liberman and his team have been working with Penn’s Center for Autism Research to transcribe and digitize the answers given for a section of the Autism Diagnostic Observation Schedule (ADOS). Looking at many aspects of the recordings, including the words used, the rate of speech, and the amount of pausing versus speaking, they found a good deal of differentiation between children with autism spectrum disorder (ASD) and those with other conditions such as ADHD.
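
Two of the measures mentioned, rate of speech and amount of pausing, are straightforward to compute once a recording has been transcribed and time-aligned. The sketch below shows the idea; the transcript format and numbers are invented, not the team’s actual data.

```python
# Toy feature extraction from a time-aligned transcript.
# Each utterance: (start_seconds, end_seconds, text). Values are invented.

transcript = [
    (0.0, 2.1, "I like trains a lot"),
    (4.8, 6.0, "especially the fast ones"),
    (9.5, 11.2, "they go on tracks"),
]


def speech_features(utterances):
    words = sum(len(text.split()) for _, _, text in utterances)
    speaking_time = sum(end - start for start, end, _ in utterances)
    total_time = utterances[-1][1] - utterances[0][0]
    return {
        "speech_rate_wpm": 60.0 * words / speaking_time,    # words per minute while talking
        "pause_fraction": 1.0 - speaking_time / total_time, # share of time spent silent
    }


print(speech_features(transcript))
# -> {'speech_rate_wpm': 156.0, 'pause_fraction': 0.55...}
```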

“We would like to join with other sites to examine several thousand interviews in the hope of really being able to learn something about what the true dimensions underlying this area are,” says Liberman. The work may eventually also help to develop a shorter and easier way to diagnose the condition and test the effectiveness of treatments.

Linguists at Penn and elsewhere are also working in areas including speech recognition and synthesis, machine translation, and information retrieval from texts. One way this work will affect our lives, Liberman believes, is that the automated systems that handle phone calls for many businesses will probably be able to perform as well as humans within five to 10 years.

“There are people whose livelihoods will be affected, just as there were lots of weavers whose livelihoods were affected by the invention of the power loom,” he says. “Norbert Wiener, who invented the term cybernetics, observed in the 1950s that machine labor is a kind of slave labor, and that people who have to compete with that have to accept in effect the conditions of slave labor. He suggested that if we think about it right, maybe we can find ways to make the transformations in a more humane fashion.”

THE LITERARY CRITIC

James English

As chair of the 2016 National Book Award’s fiction committee, James English had his own reading list this year. The John Welsh Centennial Professor of English and his fellow writers and literary scholars started with 400 novels and slowly narrowed them down to the winner. Or they could have just asked a computer which book would win. 

“Last year, Andrew Piper at McGill did something nifty,” says English. Piper digitized the shortlisted and winning books for Canada’s Giller Prize from the last 20 years, then ran an algorithm to pick that year’s winner. “The computer basically looks for an outlier, the one that stands out,” says English, faculty director of Penn’s Price Lab for the Digital Humanities. But the computer generated two outliers. English describes the more extreme one as “avant-garde, quite unusual to even be considered”; the other was more traditional. Says English, “Andrew, in his wisdom as a literary scholar, predicted the more traditional book would win. But in fact the book that the machine predicted won the prize.”
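
The outlier idea is simple enough to sketch: represent each shortlisted book as a vector of word weights and flag the one farthest from the group’s center. Piper’s actual features and method may well differ; the “books” below are stand-in strings invented for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy outlier detection over a "shortlist." Each book becomes a TF-IDF
# vector; the predicted winner is the book farthest from the centroid.

books = {
    "Novel A": "the family gathered quietly at the lake house every summer",
    "Novel B": "she remembered the town the harbor the long grey winters",
    "Novel C": "memory and family and the slow work of forgiveness",
    "Novel D": "fragment. static. a voice dissolving into white noise. signal",
}

vectors = TfidfVectorizer().fit_transform(list(books.values())).toarray()
centroid = vectors.mean(axis=0)
distances = np.linalg.norm(vectors - centroid, axis=1)

for title, d in zip(books, distances):
    print(f"{title}: distance {d:.2f}")
print("predicted outlier:", list(books)[int(distances.argmax())])
```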

For English’s own Contemporary Fiction Database project, his team is working with books published since 1960 in two groups: top-10 bestsellers, and books that were shortlisted for or won a major award. They discovered that around the beginning of the ’80s, the two categories separated along temporal lines. Most award-considered books since then are set more than 20 years in the past, while 90 percent of bestsellers now take place in the present or future.

Now they’ve created and trained an algorithm to tell when a book is set more than 20 years before its publication. English says, “Historical novels are hard to discover algorithmically, but it turns out to be a very important distinction.”
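
As a toy stand-in for that trained algorithm, the sketch below guesses at a novel’s setting from explicit year mentions and checks whether it falls more than 20 years before publication. The team’s real model presumably draws on much richer textual evidence; the regular expression and examples here are illustrative only.

```python
import re

# Crude heuristic for the task described above: is a novel set more than
# 20 years before it was published? Uses explicit year mentions only.


def looks_historical(text: str, publication_year: int, gap: int = 20) -> bool:
    # Match four-digit years from 1600 through the 2020s.
    years = [int(y) for y in re.findall(r"\b(1[6-9]\d\d|20[0-2]\d)\b", text)]
    if not years:
        return False          # no dateline evidence either way
    setting = min(years)      # crude guess at when the story is set
    return publication_year - setting > gap


print(looks_historical("It was the spring of 1862 when the war came.", 1998))  # True
print(looks_historical("She closed her laptop; it was 2014 already.", 2016))   # False
```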

“The machine can’t explain the patterns it sees, but it is making itself smart about a social process,” he says. “Literature is a very complicated social practice. This is where literary studies converge with a lot of what’s being done in the social sciences working with Twitter data. Rather than making assumptions and building models, we’re letting the machines find what they will based on frequencies, and then tell us what they see.”

THE DIGITAL MIRROR

Heidi Voskuhl

Worries about inhuman humanoids started long before any actual robots were created. From ancient Greece’s Cadmus turning dragon’s teeth into soldiers to early 20th century media that racialized and sexualized robots, “it’s about power,” says Heidi Voskuhl, an associate professor of history and sociology of science. “But it’s just one manifestation of our fears about technology.”

Voskuhl studies the history of technology from the early modern to the modern period and is the author of Androids in the Enlightenment: Mechanics, Artisans, and Cultures of the Self, which won the American Philosophical Society’s Jacques Barzun Prize in Cultural History. Androids and robots hold a particular fascination for us because they look and act like we do, but Voskuhl thinks our identification with technology doesn’t stop there. “I believe one powerful thing about our psyche is a desire to identify with something outside ourselves. Otherwise we don’t learn from each other, and you never see yourself mirrored. We recognize ourselves in anything.”

We may also identify with technology because we view it as competition. “Speech recognition is very different from an automated loom run by a steam engine,” says Voskuhl. “However, the concern that we’re made obsolete, that is fairly old.”

She points out that, as a democratically governed people, we feel we are not really in charge of where technology is going. “I might even say that technology progresses autonomously and we don’t have input on it, something called technological determinism,” she says. There are now also concerns that some seemingly objective algorithms may be unfair. “If we get denied a mortgage, it’s probably some algorithm. I think even your zip code goes in there.”

However, Voskuhl warns against jumping to conclusions or settling for quick answers. “People sensationalize because our desire to see something is so strong. But it’s okay to say we don’t know.”