How AI is stopping criminal hacking in real time
Source: John Brandon


Almost every day, there’s news about a massive data leak -- a breach at Yahoo that exposes millions of user accounts, or a phishing scam that compromises Gmail inboxes. Security professionals are constantly moving the chess pieces around, but it can be a losing battle.

Yet, there is one ally that has emerged in recent years. Artificial intelligence can stay vigilant at all times, looking for patterns in behavior and alerting you to a new threat.

While AI is nowhere close to perfect, experts tell CSO that machine learning, adaptive intelligence, and massive data models can spot hacking far faster than any human -- and they are here to help.

“There are some groundbreaking AI solutions built around cyber security analytics,” says George Avetisov, CEO and cofounder of biometric security company HYPR.

“The processes behind threat intelligence and breach discovery have remained incredibly slow due to the need for a human element. AI is transforming the speed at which threats are identified and attacks are mitigated by greatly increasing the speed at which such intelligence is processed.”

According to Avetisov, the big change has to do with removing the rules-based engines that have been in use at larger companies for decades. An AI adapts and learns about threats in real time, and it can analyze large data sets that are often fragmented and overlap with one another.

In this scenario, he says, the role of a human operator is to weed out false positives and, to an ever-increasing degree, make sure the data sets fed into an AI engine are accurate and robust. In some ways, it could be said that an AI is only as intelligent as the data it analyzes. What’s interesting is that an AI can also predict behavior based on current data sets, adapting your own security infrastructure based on what could potentially lead to a breach.
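
To make that feedback loop concrete, here is a minimal sketch in Python. It is purely illustrative -- the feature names, thresholds, and model choice are invented for this article, not anything HYPR or the other vendors quoted here describe. A classifier is retrained as analysts confirm or reject its alerts, which is roughly what “only as intelligent as the data it analyzes” means in practice.

```python
# Illustrative sketch only (not a real product API): an adaptive model that is
# periodically retrained on analyst-reviewed events, in contrast to a fixed
# rules engine. The feature values and labels below are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each event: [failed_logins, bytes_out_mb, off_hours (0/1)] -- invented features
events = rng.random((200, 3))
labels = (events[:, 0] > 0.8).astype(int)   # stand-in ground truth

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(events, labels)

def review_and_retrain(new_events, analyst_labels):
    """Analysts weed out false positives; their verdicts become new training data."""
    global events, labels
    events = np.vstack([events, new_events])
    labels = np.concatenate([labels, analyst_labels])
    model.fit(events, labels)               # the model adapts as the data set grows

# A batch of new alerts, corrected by a human operator, folded back in:
new_batch = rng.random((20, 3))
review_and_retrain(new_batch, (new_batch[:, 0] > 0.8).astype(int))
```
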
Novel approaches


In the future, AI could be added to services we all rely on each day. In Gmail, for example, when you receive an email that looks legitimate, an AI can scan countless variables -- such as the originating IP address, location data, the word choice and phrasing in the email, and other factors -- and alert you to a phishing scam.
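
A simplified, hypothetical example of what such scoring could look like follows. The signal names, weights, and threshold are made up for illustration; Gmail’s actual filtering is far more sophisticated and learns its weights from data rather than hard-coding them.

```python
# Hypothetical illustration only: combining several weak signals about an
# email into one phishing score. Names, weights, and the threshold are invented.
SUSPICIOUS_PHRASES = ("verify your account", "urgent action required", "reset your password")

def phishing_score(sender_domain: str, reply_to_domain: str,
                   origin_country: str, usual_countries: set, body: str) -> float:
    score = 0.0
    if sender_domain != reply_to_domain:          # From: and Reply-To: disagree
        score += 0.4
    if origin_country not in usual_countries:     # unusual originating location
        score += 0.3
    hits = sum(p in body.lower() for p in SUSPICIOUS_PHRASES)
    score += min(0.3, 0.15 * hits)                # wording typical of phishing lures
    return score

# Flag for review when the combined score crosses a (made-up) threshold.
if phishing_score("paypa1-support.example", "mailhost.example", "XX",
                  {"US", "CA"}, "Urgent action required: verify your account") > 0.6:
    print("possible phishing -- warn the user")
```
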

One of the most interesting uses for AI in blocking attacks has to do with classification. Mark Testoni, the president and CEO of enterprise security company SAP NS2, told CSO that an AI can quantify the level of threat in ways that would normally require much more human effort.

“An AI has supervised learning capabilities using neural networks for entity and pattern recognition for intrusion detection systems and event forensics applications,” says Testoni. “They can classify entities and events to reduce mean time to identification of problems, and analyze the behavior behind the attacks. For example, what does the attacker want, how will it affect my organization, what aspects of my business are at most risk and the impact analysis of the attack itself?”
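
As a rough sketch of that classification idea -- using scikit-learn as a stand-in, not SAP NS2’s implementation, and with synthetic event features and severity labels -- a small neural network can be trained on labeled events and then used to triage new ones:

```python
# Minimal sketch of supervised event classification for triage. The features,
# labels, and model are assumptions for illustration, not a vendor's system.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Features per event: [port_scan_rate, privilege_escalations, data_exfil_mb]
X = rng.random((300, 3))
# Severity label: 0 = benign, 1 = suspicious, 2 = active intrusion
y = np.digitize(X.sum(axis=1), bins=[1.2, 2.0])

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=1)
clf.fit(X, y)

new_event = [[0.9, 0.8, 0.7]]
severity = clf.predict(new_event)[0]
confidence = clf.predict_proba(new_event)[0][severity]
print(f"severity class {severity} with confidence {confidence:.2f}")
```
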

Another area of focus: having an AI inspect all network traffic. Today, it can be difficult to block a harmful email or attachment because no rule covers that data yet, or the harmful agent has never been detected before. Forensic security tends to look at the damage after it takes place. However, as Nathan Wenzler, the chief security strategist at AsTech Consulting, explained, an AI can ingest the data, look for patterns, and block network traffic in real time.
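
One plausible, simplified way to picture this -- again an assumption for illustration, not a description of AsTech’s or anyone else’s product -- is an anomaly detector fitted on a baseline of normal traffic that then scores each live flow:

```python
# Sketch of real-time traffic inspection: fit an anomaly detector on a baseline
# of normal flows, then score live flows and block the ones that look anomalous.
# The flow features and thresholds are invented for this example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Baseline flows: [packets_per_sec, avg_payload_bytes, distinct_dest_ports]
baseline = rng.normal(loc=[100, 500, 3], scale=[10, 50, 1], size=(1000, 3))
detector = IsolationForest(contamination=0.01, random_state=2).fit(baseline)

def handle_flow(flow):
    # predict() returns -1 for flows that look anomalous relative to the baseline
    if detector.predict([flow])[0] == -1:
        return "block"          # e.g. reset the connection, quarantine the host
    return "allow"

print(handle_flow([105, 480, 3]))      # close to the baseline -> allow
print(handle_flow([900, 60, 120]))     # port-scan-like outlier -> block
```
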

Fred Wilmot, the interim CEO/CTO of threat detection company PacketSled, made an interesting point about all of these AI advancements. In the coming months and years, security professionals will rely more on machine learning, and their role might shift toward that of AI engineers who create the learning models. For now, AI is still not mature enough, especially for the fraud detection and mitigation that takes place in the financial sector.
The dark side of using AI to fight hacking

Avetisov did mention one dark side. While security professionals can rely on AI to help block malware attacks and other intrusions, hackers are leaning on AI too. It’s a counter-offensive: attackers are using machine learning as well, in their case to find weak endpoints.

“Hackers are just as sophisticated as the communities that develop capability to defend themselves against hackers,” says SAP NS2’s Testoni. “They are using the same techniques, such as intelligent phishing, analyzing behavior of potential targets to determine what type of attack to use, and ‘smart malware’ that knows when it is being watched so it can hide.”

“We've seen more and more attacks over the years take on morphing characteristics, making them harder to predict and defend against,” says Wenzler from AsTech Consulting. “Now, leveraging more machine learning concepts, hackers can build malware that can learn about a target's network and change up its attack methodology on the fly.”

Neill Feather, the president of website security company SiteLock, noted that the AI programming someone might use for criminal hacking is more complex and carries higher costs. Even so, the incentive will remain as long as unethical AI leads to more breaches.

In the end, the cyber war will continue -- quite possibly between the AI bots.

This story, "How AI is stopping criminal hacking in real time" was originally published by CSO.

