There’s a question that looms over the heads of many—one that brings up just as much excitement and optimism as it does doom and gloom: how will new technology affect our future?
The SIS Center for Security, Innovation, and New Technology (CSINT) aims to address this question as objectively as possible. According to the center’s founder and director, Professor Audrey Kurth Cronin, there are few institutions weighing both the negative and positive aspects of new technologies—which is why the center is so important.
“It’s not meant to be an alarmist place,” says Cronin. “And it’s not meant to be a cheerleader for every new technology. Every new technology that we’re studying has great potential, but also great potential for harm. And if we don’t at least look at the potential for harm, we will fail to realize the great opportunities that the technology has.”
These technologies include civilian and military robotics, drones, artificial intelligence, self-driving cars, the Internet of Things—the list is lengthy. Cronin emphasizes that the center is dedicated to maximizing the gains and minimizing the losses for humankind—in other words, strengthening the benefits of new technologies while decreasing the risks.
One example Cronin cites is driverless cars: “How can we minimize the potential for our cars, which are all increasingly computerized, to be hacked by nefarious actors? Imagine if there were a terrorist group, and instead of using vehicle bombs, in which people drive in a suicide attack, they could use driverless cars. So that’s a potential downside. An upside is that you could have an older population able to get around by car, rather than being housebound.”
One of the center’s areas of focus is likely on the minds of most AU students: the future of work. SIS professor Jennifer Poole is homing in on this area. From an economist’s perspective, she looks at the effects of robotics on the ways companies operate, the types of people they hire and don’t hire, the labor force, and the final products produced.
“The question of the future workplace is one of the reasons I think it’s important for the center to be located at an educational institution,” says Cronin. “When we talk about this, we’re talking to people whose lives will be shaped by it. We’re responsible for teaching students and making sure they’re prepared for not just what the workplace will be when they graduate, but what the workplace will be in 10 years. How can we prepare them to be the most agile and resilient for that?”
CSINT officially launched on Tuesday, February 19 with a book talk given by author and former Pentagon official Paul Scharre. He discussed Army of None: Autonomous Weapons and the Future of War, which Bill Gates named one of his “Five Books I Loved in 2018.”
“The future of humanity will potentially depend on whether we allow individual weapons to have the kind of independence that some people advocate for,” says Cronin. “Paul’s book talks about whether or not it’s a good idea to use artificial intelligence, especially in military weapons. I think it’s a really important issue to address in an opening event for the center, because everyone’s really worried about the implications of militarized artificial intelligence.”
That night, Scharre discussed autonomous weapons that various countries are developing and using, along with the laws of war, the ethical considerations of war, and the role of autonomous weapons in international stability.
“This technology is widely available to all kinds of actors, and of course the US doesn’t have a monopoly on it,” said Scharre. “We don’t get to control the pace at which it spreads or how people use it. We have to react to this world, where not only is the technology that exists today proliferating and being used in ways that we might not like, but it’s also evolving and becoming more advanced. With each generation, we’re seeing more advanced and autonomous drones, not just in the air, but also on the ground, at sea, and under the sea as well.”
Another research focus at CSINT is the ethics of new technologies, an area that Scharre touched upon during the center’s launch event. He described how, when he was part of an Army ranger sniper team in Afghanistan, the Taliban sent a young girl to scout his team’s position: “She was maybe five or six, with goats in tow as a cover. We heard the chirping of what we later found out was the radio she had on her. She was reporting back on our position, and later that day Taliban fighters came.”
Afterward, Scharre talked through the situation with his team: what would they do if they came across other civilians like the little girl? They discussed whether they would detain them, pat them down, or search them for radios.
“Something that did not come up was shooting this little girl. No one suggested this was the right thing to do, but under the laws of war, it would have been legal: by scouting for the enemy, she was participating in hostilities,” said Scharre. “So if you had a robot that was designed to perfectly comply with the laws of war, it would have killed this little girl. When you think about this technology, how would a robot know the difference between what’s legal and what’s right? It makes you think about the role of human judgment in weighing the value of human life.”
Along with ethics, CSINT emphasizes history. According to Cronin, the center’s broad historical perspective makes it unique. Looking into how new technologies have been adopted in the past offers insight into how people will use new technologies today.
“Most people think that emerging technologies are completely unprecedented, but many of them were built with US government-funded grants or on government installations in the ’60s, ’70s, and ’80s,” says Cronin. “Your phone’s Siri goes back to voice activation systems that were developed by the military.”
As technology advances, CSINT researchers will continue to learn what human innovation can accomplish, as well as whether such accomplishments are worth the risks:
“[The point of the center] is about thinking through what’s the best way to use something and not jumping to conclusions just because we can. The question is ‘Should we?’”