

Getting with the (AI) Future


Erran Carmel outside Kogod. Credit: Will Diamond.

We don’t often think about it, but we interact with artificial intelligence (AI) and automation every day. From your curated Facebook feed to the match you made last night on a dating app, AI has become embedded in our society and culture.

Movies like Her or Ex Machina illustrate a world where AI steps out from behind the screen and into the hustle and bustle of real life. And while science fiction often portrays AI as robots that possess human-like characteristics, it stretches way beyond those examples.

Kogod’s Erran Carmel, a technology and analytics professor, broke down the basics of artificial intelligence for us. There are two types of AI, he explained: narrow and general. Narrow AI, or weak AI, can perform only a small set of tasks.

“Look at Siri,” he said. “It’s able to answer lots of different kinds of questions, but it can’t process your checking account, make payments, or wash the dishes for you. It’s still somewhat narrow.”

On the other hand, general AI, also known as AGI or strong AI, is the long-term aspiration for many researchers in the industry—creating machines that could outperform humans at almost all cognitive tasks.

“We’re seeing, as consumers, the beginning of the AI revolution,” Carmel said.

As artificial intelligence develops, it will become intertwined with certain industries, challenging us to reimagine the job market. In a December 2017 report, the McKinsey Global Institute predicted that by 2030, 60 percent of occupations will have at least 30 percent of their work activities automated, leaving 75 to 375 million workers, or 3 to 14 percent of the global workforce, in need of a new career.

Autonomous vehicles, produced by companies like Uber, Google, and Tesla, are a particularly relevant example. While all are still in the testing and development phase, with Uber freezing its public testing after a deadly crash, milestones have been reached that could put driverless vehicle programs on the road in the 2020s.

Carmel has his own prediction about driverless vehicles hitting the roads: by 2030, about 10 percent of driving trips will be autonomous. And while that will certainly result in job loss, Carmel points to moments throughout history when technological innovation decimated industries and pushed individuals into new fields.

“When I started my career, there were typists and secretaries everywhere, even here at American University,” he said. “They’re all gone now. We all rethought our jobs and everybody now does their own typing. I am less pessimistic about the threat of jobs becoming obsolete.”

As AI becomes further ingrained in every element of our lives, industries—especially the government—are also reimagining how they can best serve and support society. Just this month, the United States Department of Defense announced that it will funnel $2 billion into artificial intelligence, including weapons applications, over the next five years.

At the moment, AI systems can choose targets and fire weapons but cannot explain why they made those choices. Technologists are currently researching how computers can justify key decisions to humans on the battlefield—especially as variables shift and decisions must be altered. This ability to provide a rationale will shape AI's future use in the military.

“There’s so many areas of AI that can be militarized. The ability of future wars to be AI-driven is quite significant,” Carmel said.

From militarized drones to algorithm-driven apps, there’s no telling what the future holds for AI, though Carmel has a number of projections—one being that AI will eventually be given personhood by the Supreme Court. This brings up a deeper philosophical issue: consciousness. While we might get a clever comeback from Siri or a helpful tip from Alexa, consciousness would be a new level of sophistication.

“In order for an AI to be given personhood from a legal perspective, it has to have consciousness. We have to recognize in it that it has emotion,” he said. “As our AI gets a lot more sophisticated, I anticipate that we’ll be able to understand what consciousness means.”

Learn more about Erran Carmel and his background in technology.