AI will transform information security, but it won’t happen overnight
Source: Doug Drinkwater


Although it dates as far back as the 1950s, Artificial Intelligence (AI) is the hottest thing in technology today.

An overarching term used to describe a set of technologies such as text-to-speech, natural language processing (NLP) and computer vision, AI essentially enables computers to do things normally done by people.

Machine learning, the most prominent subset of AI, is about recognizing patterns in data and having the computer learn from them much as a human would. These algorithms draw inferences without being explicitly programmed to do so. The idea is that the more data you collect, the smarter the machine becomes.
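
To make that idea concrete, here is a minimal Python sketch (an illustration only, using scikit-learn and synthetic data rather than anything from the vendors mentioned in this article) showing a simple classifier's accuracy climbing as the training set grows:

```python
# Illustrative sketch only: a classifier tends to improve as it sees more
# labelled examples. The dataset here is synthetic, purely for demonstration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for "patterns in data"
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

for n in (100, 500, 2000, len(X_train)):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>4} examples -> accuracy {model.score(X_test, y_test):.3f}")
```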

At the consumer level, AI use cases include chatbots, Amazon’s Alexa and Apple’s Siri, while in the enterprise, AI software is being aimed at curing diseases and optimizing business performance, for example by improving customer experience or detecting fraud.

There is plenty to back up the hype: a Narrative Science survey found that 38 percent of enterprises are already using AI, rising to 62 percent by 2018, and Forrester Research predicts a 300 percent year-on-year increase in AI investment this year. AI is clearly here to stay.
Security wants a piece too

Unsurprisingly, given the constant evolution of criminals and malware, InfoSec also wants a piece of the AI pie.

With its ability to learn patterns of behavior by sifting through huge datasets, AI could help CISOs find those ‘known unknown’ security threats, automate security operations center (SOC) response and improve attack remediation. In short, with skilled personnel hard to come by, AI fills some (but not all) of the gap.
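
As a rough illustration of what sifting through large datasets for anomalies can look like in practice, the hedged sketch below runs an off-the-shelf unsupervised detector over hypothetical per-host activity features; the feature names, numbers and thresholds are assumptions made for the example, not any product's design:

```python
# Hedged sketch: unsupervised anomaly detection over per-host activity counts.
# Features and values are invented for illustration; this is not a product design.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features per host: [logins_per_hour, bytes_out_mb, failed_auths]
normal = rng.normal(loc=[5, 50, 1], scale=[2, 15, 1], size=(1000, 3))
suspicious = np.array([[40, 900, 25]])          # say, a host exfiltrating data
activity = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(activity)
flags = detector.predict(activity)               # -1 = anomaly, 1 = normal
print("hosts flagged for analyst review:", np.where(flags == -1)[0])
```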

Experts have called for smart, autonomous security systems, and American cryptographer Bruce Schneier believes that AI could offer the answer.

“It is hyped, because security is nothing but hype, but it is good stuff,” said Schneier, who is CTO of Resilient Systems.

“We’re a long way off AI from making humans redundant in cybersecurity, but there’s more interest in [using AI for] human augmentation, which is making people smarter. You still need people defending you. Good systems use people and technology together.”

Martin Ford, futurist and author of ‘Rise of the Robots’, says both white and black hats are already leveraging these technologies, such as deep learning neural networks.

“It's already being used on both the black and white hat sides,” Ford told CSO. “There is a concern that criminals are in some cases ahead of the game and routinely utilize bots and automated attacks. These will rapidly get more sophisticated.”

“...AI will be increasingly critical in detecting threats and defending systems. Unfortunately, a lot of organizations still depend on a manual process -- this will have to change if systems are going to remain secure in the future.”

Some CISOs, though, are preparing to do just that.

“It is a game changer,” Intertek CISO Dane Warren said. “Through enhanced automation, orchestration, robotics, and intelligent agents, the industry will see greater advancement in both the offensive and defensive capabilities.”

Warren adds that improvements could include responding more quickly to security events, better data analysis and “using statistical models to better predict or anticipate behaviors.”

Andy Rose, CISO at NATS, also sees the benefits: “Security has always had a need for smart processes to apply themselves to vast amounts of disparate data to find trends and anomalies – whether that is identifying and stopping spam mail, or finding a data exfiltration channel.

“People struggle with the sheer volume of data so AI is the perfect solution for accelerating and automating security issue detection.”
Security use cases see start-ups boom

Security providers have always tried to evolve with the ever-changing threat landscape and AI is no different.

However, with technology naturally outpacing vendor transformation, start-ups have quickly emerged with novel AI-infused solutions for improving SOC efficiency, quantifying risks and optimizing network traffic anomaly detection.

Relative newcomers Tanium, Cylance and, to a lesser extent, LogRhythm have jumped into this space, but it’s start-ups like Darktrace, Harvest.AI, PatternEx (which came out of MIT) and StatusToday that have caught the eye of the industry. Another relative unknown, SparkCognition, unveiled what it called the first AI-powered cognitive antivirus (AV) system at Black Hat 2016.

The tech giants are now playing with AI in security too; Google is working on an AI-based system to replace traditional CAPTCHA forms, and its researchers have taught AI to create its own encryption. IBM launched Watson for Cyber Security earlier this month, while in January Amazon acquired Harvest.AI, which uses algorithms to identify a business’s important documents and intellectual property, then applies user behavior analytics and data loss prevention techniques to protect them from attack.

Some describe these products as ‘first-gen’ AI security solutions, primarily focused on sifting through data, hunting for threats, and facilitating human-led remediation. In the future, AI could automate 24x7 SOCs, enabling workers to focus on business continuity and critical support issues.

“I see AI initially as an intelligent assistant – able to deal with many inputs and access expert level analytics and processes,” agrees Rose, adding that AI will support security professionals in “higher level analysis and decisions.”

Ignacio Arnaldo is chief data scientist at PatternEx, which offers an AI detection system that automates SecOps tasks, such as detecting APTs from network, application and endpoint logs. He says that AI offers CISOs a new level of automation.

“CISOs are well aware of the problems - they struggle to hire talent, and there are more devices and data that need to be analyzed. CISOs acknowledge the need for tools that will increase the efficiency of their SOCs. AI holds the promise, but CISOs have not yet seen an AI platform that clearly proves to increase human efficiency.”
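
The kind of SOC efficiency gain being described is essentially alert triage: scoring raw alerts so analysts see the riskiest first. The sketch below is a deliberately simplified, hypothetical illustration of that idea - the alert fields and scoring heuristic are invented for the example and are not PatternEx's method:

```python
# Hypothetical alert-triage sketch: rank raw alerts so analysts handle the
# riskiest first. The fields and scoring heuristic are illustrative only.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    failed_logins: int
    bytes_out_mb: float

def risk_score(a: Alert) -> float:
    # Toy heuristic standing in for a learned model's output
    return 0.6 * min(a.failed_logins / 20, 1.0) + 0.4 * min(a.bytes_out_mb / 500, 1.0)

alerts = [
    Alert("web-01", failed_logins=2, bytes_out_mb=12),
    Alert("db-03", failed_logins=35, bytes_out_mb=640),
    Alert("hr-lt-7", failed_logins=9, bytes_out_mb=80),
]

# Surface only the top-scoring alerts to the analyst queue
for a in sorted(alerts, key=risk_score, reverse=True)[:2]:
    print(f"{a.host}: score={risk_score(a):.2f}")
```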

“More and more CISOs fully understand that the global skills shortage, and the successful large-scale attacks against high maturity organizations like Dropbox, NSA/CIA, and JPMorgan are all connected,” says Darktrace CTO Dave Palmer, whose firm provides machine learning technology to thousands of companies across 60 countries worldwide.

“No matter how well funded a security team is, it can’t buy its way to high security using traditional approaches that have been demonstrably failing and that don’t stand a chance of working in the anticipated digital complexity of our economy in 10 years’ time.”
AI undermined by basics, cybercrime

But for all of this, some think we’re jumping the gun. AI, after all, seems a luxury item in an era in which many firms still don’t do regular patch management.

At this year’s RSA conference, crypto experts mulled how AI is applicable in security, with some questioning how to train the machine and what the human’s role is. Machine reliability and oversight were also raised, while others suggested it is odd to see AI championed when security is so often undone by failures in low-level basics.

“I completely agree,” says Rose. “Security professionals need to continually reassess the basics – patching, culture, SDLP etc. – otherwise AI is just a solution that will tell you about the multitude of breaches you couldn’t, and didn’t, prevent.”

Schneier sees it slightly differently. He believes security can be advanced and yet still fail at the basics, while pointedly noting that AI should only be for those who have the security posture and processes in place and are ready to leverage the machine data.

Ethics, he says, is only an issue for full automation, and he’s unconcerned about such tools being utilized by black hats or surveillance agencies.

“I think this is all a huge threat,” says Ford, disagreeing. “I would rank it as one of the top dangers associated with AI in the near to medium term. There is a lot of focus on "super-intelligent machines taking over"...but this lies pretty far in the future. The main concern now is what bad people will do when they have access to AI.”

Warren agrees there are obstacles for CISOs to overcome. “It is forward thinking, and many organizations still flounder with the basics.”

He adds that with these AI benefits will come challenges, such as the costly rewriting of apps and the possibility of introducing new threats. “...Advancements in technology introduce new threat vectors.”

“A balance is required, or the environment will advance to a point where the industry simply cannot keep pace.”
AI security is no panacea

AI and security is not necessarily a perfect match. As Vectra CISO Gunter Ollmann recently blogged, buzzwords “have made it appear that security automation is the same as AI security” - meaning there is a danger of CISOs buying solutions they don’t need, while there are further concerns over AI ethics, quality control and management.

Arnaldo critically points out that AI security is no panacea either. “Some attacks are very difficult to catch: there are a wide range of attacks at a given organization, over various ranges of time, and across many different data sources.

“Second, the attacks are constantly changing...Therefore, the biggest challenge is training the AI.”

If this points to some AI solutions being ill-equipped, Palmer adds further weight to the claim.

“Most of the machine learning inventions that have been touted aren’t really doing any learning ‘on the job’ within the customer’s environment. Instead, they have models trained on malware samples in a vendor’s cloud and are downloaded to customer businesses like anti-virus signatures. This isn’t particularly progressive in terms of customer security and remains fundamentally backward looking.”
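
Palmer's distinction - a model trained once in a vendor's cloud versus one that keeps learning in the customer's environment - can be illustrated with a small, hedged sketch using incremental learning on synthetic data. It is not any vendor's implementation, just a way to show why a frozen model degrades as local conditions drift:

```python
# Hedged sketch of "learning on the job": a model updated incrementally on local
# data versus one shipped with frozen parameters. Synthetic data; not vendor code.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

def local_traffic(threshold):
    # Hypothetical single feature (say, outbound volume); what counts as
    # "malicious" drifts over time in the customer's environment
    X = rng.normal(loc=0.0, scale=1.0, size=(300, 1))
    y = (X[:, 0] > threshold).astype(int)
    return X, y

X0, y0 = local_traffic(threshold=0.0)

# "Signature-style" model: trained once offline, then never updated
frozen = SGDClassifier(random_state=0).fit(X0, y0)

# On-the-job model: keeps adapting as the local environment drifts
adaptive = SGDClassifier(random_state=0)
adaptive.partial_fit(X0, y0, classes=np.array([0, 1]))

for day, threshold in enumerate([0.5, 1.0, 1.5], start=1):
    X, y = local_traffic(threshold)
    print(f"day {day}: frozen={frozen.score(X, y):.2f}  adaptive={adaptive.score(X, y):.2f}")
    adaptive.partial_fit(X, y)  # only the adaptive model sees the new local data
```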

So, how soon can we see it in security?

“A way off,” notes Rose. “Remember that the majority of IPS systems are still in IDS mode because firms lack the confidence to rely on ‘intelligent’ systems to make automated choices and unsupervised changes to their core infrastructure. They are worried that, in acting without context, the ‘control’ will damage the service – and that’s a real threat.”

But the need is imperative: “If we don't succeed in using AI to improve security, then we will have big problems because the bad guys will definitely be using it,” says Ford.

“I absolutely believe increased automation and ease of use are the only ways in which we are going to improve security, and AI will be a huge part of that,” says Palmer.

