Cybersecurity

Why AI is Every Hacker's Dream

And what companies need to do about it.

By John Ombelets

Associate professor and cybersecurity expert William Robertson

The Wall Street Journal noted recently that corporate spending on artificial intelligence is expected to increase nearly 500 percent by 2020. Amid the excitement about AI, autonomous cars, and connected devices, security for these sophisticated systems has received scant attention. William Robertson, co-director of the Systems Security Lab at Northeastern University’s College of Computer and Information Science, says that the rise of AI presents would-be cybercriminals with new targets—and everyone else with new security worries.

Q: What does the proliferation of AI-based computers and machines mean for cybersecurity?

In computer security, we talk about “attack surface”—the parts of a system exposed to hacking, such as the code that processes incoming data. It’s safe to say that AI-based machines now in development, like delivery drones and self-driving cars, will have a substantial attack surface for hackers. Any machine that connects to a network, whether for navigation, control, or other functions, is vulnerable.

Q: Does that include advanced AI computers such as Google’s DeepMind and IBM’s Watson?

Potentially, sure. These machines are not currently operating in an adversarial environment, where they face an attacker actively trying to trick them or steal something. When that changes, these AI programs will be vulnerable. Think about a networked Watson playing that Jeopardy! game, and how it might have worked out if a hacker had been able to redirect Watson’s programming in the middle of the game. That’s why I think one of the next big subfields in cybersecurity is going to be adversarial machine learning.
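
To make “adversarial machine learning” concrete, here is a minimal sketch of the kind of manipulation the field studies: nudging an input just enough that a trained model changes its answer. The tiny logistic-regression model, the input values, and the perturbation budget below are illustrative assumptions, not a description of Watson or any system mentioned here.

    import numpy as np

    # Toy logistic-regression "classifier"; the weights and bias are assumed,
    # standing in for any trained model an attacker can inspect or query.
    w = np.array([1.5, -2.0, 0.5])
    b = 0.1

    def score(x):
        # Probability the model assigns to the "positive" class.
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    # A legitimate input the model labels positive (score > 0.5).
    x = np.array([0.2, 0.1, 0.4])
    print("clean input:    ", round(score(x), 3))

    # Fast-gradient-sign-style perturbation: shift every feature slightly in the
    # direction that pushes the score toward the wrong class. For a linear model
    # that direction is simply the sign of the weights.
    epsilon = 0.4                       # attacker's perturbation budget (assumed)
    x_adv = x - epsilon * np.sign(w)
    print("perturbed input:", round(score(x_adv), 3))   # drops below 0.5: label flips

The perturbation here is deliberately large so the effect is obvious; in practice, attacks on image or sensor inputs can flip a model’s output with changes far too small for a person to notice.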

14.9%: percentage of U.S. households with at least one smart home device in 2017; 60.7% projected for 2021* (excluding connected home appliances and smart TVs)

Q: What do you see as some specific security challenges in the AI realm that will need to be solved?

Take delivery drones, like those that Amazon and other companies are testing. These machines have limited computing power and they run on batteries. Both factors complicate the challenge of securing the drone’s controls and keeping its data confidential, because strong encryption is a drain on both computing power and battery life.
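
As a rough illustration of that trade-off, the short benchmark below times two widely used authenticated ciphers on a batch of small messages. The cipher choices, message size, and iteration count are assumptions picked for demonstration, and the absolute numbers depend entirely on the hardware; the point is only that encryption has a measurable cost that a drone’s compute and power budget has to absorb.

    import os, time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

    def time_cipher(cipher, label, payload, rounds=5000):
        # Encrypt the same payload repeatedly and report total wall-clock time.
        # Reusing one nonce is acceptable only because this is a throwaway benchmark;
        # a real system must use a fresh nonce for every message.
        nonce = os.urandom(12)
        start = time.perf_counter()
        for _ in range(rounds):
            cipher.encrypt(nonce, payload, None)
        print(f"{label}: {time.perf_counter() - start:.3f} s for {rounds} messages")

    payload = os.urandom(256)   # a small telemetry packet; the size is an assumption
    time_cipher(AESGCM(AESGCM.generate_key(bit_length=128)), "AES-128-GCM", payload)
    time_cipher(ChaCha20Poly1305(ChaCha20Poly1305.generate_key()), "ChaCha20-Poly1305", payload)

On hardware with AES acceleration the difference tends to be negligible; on a small processor without it, the gap, and the associated battery drain, can matter.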

In the self-driving arena, you have the many extra layers of programming complexity that enable these vehicles to navigate themselves and respond to variable road conditions.

Plus, there’s just the reality that AI has gotten hot, and companies always feel market pressure to get their products out the door quickly and at a competitive price. Designing sophisticated security measures into programs costs a lot of time, and therefore, a lot of money.

Q: Why is that?

Compared to engineering a bridge, software development is a young field, and gaining any level of assurance that an application is secure is orders of magnitude more complex than verifying that a bridge meets a specific load-bearing capacity. Structural engineering is a very well-understood domain, while software is a relatively new one that we, frankly, are barely hanging on to from a security perspective. Software development is progressing so rapidly that security experts are struggling to play catch-up.

We’re too often left making compromises, which essentially means building security shells around what we know to be vulnerable after the fact, instead of designing and building in security from the start.

And whether you’re talking about an internet application or a self-driving car, your security design depends on what attacks you want to prevent. What do you think an attacker is likely to do, and what power do you think they’ll have? And there, the sky kind of becomes the limit, because if you can imagine it, a bad guy will probably try it.

Q: Are there positive developments on the horizon that we should be on the lookout for?

Yes, a few. One involves a security approach called control-flow integrity, which could potentially solve a lot of problems. The problem is that it’s been too expensive from a pure performance standpoint. A lot of recent research in the field has focused on trying to drive down the cost, because no one is going to adopt a security approach, no matter how strong, if it’s not usable. Usability has been an often-ignored aspect of systems security research, but thankfully, that’s starting to change—maybe just in time.
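
Control-flow integrity itself is enforced by compilers and hardware on machine code, but the core idea can be sketched at a high level: before following an indirect jump or call, check that the destination is one the program was built to allow. The Python below is only a conceptual analogy with made-up handler names; real CFI instruments binaries, which is exactly where the performance cost he mentions comes from.

    # Conceptual analogy of a control-flow integrity check: an indirect "call"
    # is allowed only if its target was recorded ahead of time as legitimate.
    def handle_telemetry(msg): print("telemetry:", msg)
    def handle_navigation(msg): print("navigation:", msg)
    def attacker_payload(msg): print("attacker-controlled code runs!")   # never a valid target

    # Built "at compile time": the only functions an indirect call may reach.
    VALID_TARGETS = {handle_telemetry, handle_navigation}

    def dispatch(handler, msg):
        # The CFI check: refuse to transfer control to an unapproved target,
        # even if an attacker has corrupted the pointer we were handed.
        if handler not in VALID_TARGETS:
            raise RuntimeError("CFI violation: unexpected control-flow target")
        handler(msg)

    dispatch(handle_navigation, "waypoint reached")   # allowed
    try:
        dispatch(attacker_payload, "pwned")           # blocked by the check
    except RuntimeError as err:
        print(err)

In a real implementation the check adds a few extra machine instructions on every indirect branch, which is why driving down that overhead has been the focus of so much recent research.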

*2017 industry report from business intelligence firm Statista
