Will AI Change the Role of Cybersecurity?

Mention artificial intelligence (AI) and security, and a lot of people think of Skynet from The Terminator movies. Sure enough, at a recent Bay Area Cyber Security Meetup group panel on AI and machine learning, it was moderator Alan Zeichick – technology analyst, journalist and speaker – who first brought it up. But that wasn’t the only lively discussion during the panel, which focused on AI and cybersecurity.

I found two areas of discussion particularly interesting, both of which drew varying opinions from the panelists: first, whether AI will eliminate jobs and how it may change a security practitioner’s day-to-day work; and second, the possibility that AI could be misused, or used by malicious actors with unintended negative consequences.

Here’s what the panelists had to say.

Artificial Intelligence Eliminating Jobs?

Terry Ray, CTO of Imperva, started off this discussion by stating he didn’t believe AI would change the day-to-day job of the cybersecurity practitioner. “The practitioner doesn’t need to understand AI/machine learning – they just need to understand it enough to know if the AI-based solutions will solve the problem at hand. For example, the Verizon Data Breach Report noted that only five percent of alerts were being looked at, which means 95 percent are ignored. If you can find an AI solution that says, yes, those five percent are the only ones that matter, that is great. However, it isn’t a fundamental change to the practitioner’s day-to-day job,” commented Ray.

“The practitioner doesn’t need to understand AI/machine learning – they just need to understand it enough to know if the AI-based solutions will solve the problem at hand.” – Terry Ray, chief technology officer, Imperva
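To make Ray’s point concrete, here is a minimal sketch of alert triage: rank incoming alerts by a priority score and surface only the top few percent for human review. The feature names, weights, and thresholds below are illustrative assumptions for this article, not any panelist’s product or method – a real system would learn its scoring from data.

```python
def triage_score(alert):
    """Combine a few illustrative signals into a priority score (assumed weights)."""
    weights = {"failed_logins": 0.4, "off_hours": 0.25, "new_geo": 0.35}
    return sum(weights[k] for k, v in alert.items() if k in weights and v)

def top_alerts(alerts, fraction=0.05):
    """Return the highest-scoring fraction of alerts for analyst review."""
    ranked = sorted(alerts, key=triage_score, reverse=True)
    keep = max(1, int(len(ranked) * fraction))
    return ranked[:keep]

# Synthetic alerts standing in for a day's alert feed.
alerts = [
    {"id": i,
     "failed_logins": i % 7 == 0,   # burst of failed logins
     "off_hours": i % 3 == 0,       # activity outside business hours
     "new_geo": i % 5 == 0}         # login from an unfamiliar location
    for i in range(100)
]

worth_reviewing = top_alerts(alerts, fraction=0.05)
print(len(worth_reviewing))  # 5 of 100 alerts surface for review
```

The analyst’s job is unchanged – investigate alerts – but the queue is now five items instead of one hundred, which is exactly the shift Ray describes.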

Disagreeing with Ray, Ali Mesdaq, director of digital risk at Proofpoint, said, “This space will be disrupted, and the skillset needed in cybersecurity will change. If you are in security and you’re just pushing a button, you better watch out as AI can easily disrupt that type of position. My advice is to focus on areas that AI can’t displace. And to Terry’s point – make sure you understand the technology at some level, but don’t worry that you’ll need to understand it at a level deep enough to develop a new algorithm.”

“If you are in security and you’re just pushing a button, you better watch out as AI can easily disrupt that type of position.” – Ali Mesdaq, director of digital risk, Proofpoint

Allison Miller, product strategy, security, at Google stated, “I’m a little more cynical than my colleagues here. Security practitioners must understand this technology. Anyone who is buying a security solution should know enough about AI to be able to figure out if the tool is worthwhile. Testing is a pernicious problem for security software products, especially those that involve decision technology and operate in a way that a human wouldn’t predict.

“My hope for the technology is that we use it in the short term to get the right controls in place and that folks can spend a lot less time running down alerts that aren’t important and focus in on the areas that make a difference, which will increase performance and allow them to scale.”

AI and Malicious Misuse

As moderator Zeichick noted, in The Terminator, Skynet was turned on to track down what was hacking the system – and in the movie, it turns out it was Skynet itself. The question he posed to the panel: is it possible that AI could be misused, or used by malicious actors and turned against us, especially in the security domain?

Randy Dean, chief business officer at Launchpad.ai & Fellowship.ai, noted, “Software doesn’t inherently have ethics, and AI is inherently an optimization tool. If you are giving something with no ethics the ability to optimize whatever it wants, there is a high probability that this won’t turn out well. For example, what if you have a car with two people in it and a car with four people in it on a collision course? If we can only save one car, does AI automatically choose the one with four people?”

“Software doesn’t inherently have ethics, and AI is inherently an optimization tool. If you are giving something with no ethics the ability to optimize whatever it wants, there is a high probability that this won’t turn out well.” – Randy Dean, chief business officer, Launchpad.ai & Fellowship.ai

With this idea in mind, Dean urged the audience to think carefully about how much autonomy to give AI systems.

Ray noted, “If you turn on AI, you are effectively saying, I’m going to put the control of my security into something else’s hands. It’s going to decide whether I’m secure or not. This requires a new business mentality.”

Continuing this thought, Mesdaq stated, “AI will make mistakes, just like humans. Think about a drone. Today a human must pull the trigger and shoot at the target, and they can make a mistake. But one day soon we will allow AI to decide whether the drone should shoot at a person, and mistakes will still be possible.”

Essentially, this is a decision process, Miller noted. “I’ve been working with decision technology for more than 10 years, and I’ve not yet seen a set-it-and-forget-it system. All of those systems were tended by human caretakers who have spent a lot of time and energy sampling, checking and rechecking. Given that machine learning technology learns what you teach it, I think it is important to be very specific and clear about what you are teaching the machine.”

“I’ve been working with decision technology for more than 10 years, and I’ve not yet seen a set-it-and-forget-it system.” – Allison Miller, product strategy, security, Google

“Currently, AI requires a lot of human intelligence,” noted Dean. “I expect this to be true for a long time to come. These systems aren’t magical, they learn what you teach them, and you can teach them some wrong things.”

“The difference between a good data scientist and an awesome one is orders of magnitude in terms of where they can take this technology. But not to fear – humans will be highly involved in the development of these systems for quite some time,” Dean continued.

To learn what the panel had to say about AI predictions, ransomware or leveraging AI to test cyber defenses, see the video at https://youtu.be/o_qtoa-bhAU.

Source: Imperva
