Artificial intelligence and quantum computing aid cyber crime fight
Source: Shannon Bond



You enter your password incorrectly too many times and get locked out of your account; your colleague sets up access to her work email on a new device; someone in your company clicks on an emailed “Google Doc” that is actually a phishing link, the kind of lure initially thought to have started the recent spread of the WannaCry computer worm.

Each of these events leaves a trace in the form of information flowing through a computer network. But which ones should the security systems protecting your business against cyber attacks pay attention to and which should they ignore? And how do analysts tell the difference in a world that is awash with digital information?

The answer could lie in human researchers tapping into artificial intelligence and machine learning, harnessing both the cognitive power of the human mind and the tireless capacity of a machine. Not only should this combination of person and machine build stronger defences, its ability to protect networks should also improve over time.

A large company sifts through 200,000 so-called “security events” every day to figure out which present real threats, according to Caleb Barlow, vice-president of threat intelligence for IBM Security. These include anything from staff forgetting their passwords and being locked out of the system, to the signatures of devices used to access networks changing, to malware attempting to gain entry to corporate infrastructure. “A level of rapid-fire triage is desperately needed in the security industry,” Mr Barlow says.
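As a rough illustration of what rapid-fire triage means in practice, the sketch below sorts a handful of hypothetical events into ignore, review or escalate buckets using hand-written rules. The event fields, thresholds and categories are illustrative assumptions, not a description of IBM's actual pipeline.

```python
# A minimal, hypothetical triage pass over raw security events.
# Field names, thresholds and categories are illustrative assumptions,
# not a description of IBM's tooling.

from dataclasses import dataclass

@dataclass
class SecurityEvent:
    kind: str          # e.g. "login_lockout", "device_signature_change", "malware_signature"
    source_ip: str
    repeat_count: int  # how often this event fired in the past hour

def triage(event: SecurityEvent) -> str:
    """Return 'ignore', 'review' or 'escalate' for a single event."""
    if event.kind == "malware_signature":
        return "escalate"                       # known-bad indicators always reach an analyst
    if event.kind == "device_signature_change":
        return "review"                         # could be a new phone, could be a stolen credential
    if event.kind == "login_lockout" and event.repeat_count > 20:
        return "review"                         # repeated lockouts from one source hint at brute force
    return "ignore"                             # routine noise, e.g. one forgotten password

events = [
    SecurityEvent("login_lockout", "10.0.0.5", 1),
    SecurityEvent("login_lockout", "198.51.100.7", 42),
    SecurityEvent("malware_signature", "10.0.0.9", 1),
]
for e in events:
    print(e.kind, "->", triage(e))
```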



The stakes for businesses are high. Last year, 4.2bn records were reported to have been exposed globally in more than 4,000 security breaches, revealing email addresses, passwords, social security numbers, credit card and bank accounts, and medical data, according to analysis by Risk Based Security, a consultancy.

International Data Corporation, a US market research company, forecasts businesses will spend more than $100bn by 2020 protecting themselves from hacking, up from about $74bn in 2016.

Artificial intelligence can improve threat detection, shorten defence response times and refine techniques for differentiating between real efforts to breach security and incidents that can safely be ignored.

“Speed matters a lot. [Executing an attack] is an investment for the bad guys,” Mr Barlow says. “They’re spending money. If your system is harder to get into than someone else’s, they are going to move on to something that’s easier.”

Daniel Driver of Chemring Technology Solutions, part of the UK defence group, says: “Before artificial intelligence, we’d have to assume that a lot of the data — say 90 per cent — is fine. We would only have the bandwidth to analyse the remaining 10 per cent.

“The AI mimics what an analyst would do, how they look at data, how and why they make decisions . . . It’s doing a huge amount of legwork upfront, which means we can focus our analysts’ time. That saves human labour, which is far more expensive than computing time.”
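A minimal sketch of that upfront legwork, assuming an off-the-shelf anomaly detector (scikit-learn's IsolationForest) and made-up per-event features; Chemring's actual approach is not described in that level of detail.

```python
# Sketch: rank events by anomaly score so analysts only inspect the most
# suspicious slice. The features and the IsolationForest model are assumptions.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy features per event: [bytes transferred, failed logins, distinct ports touched]
routine = rng.normal(loc=[500, 0.2, 2], scale=[100, 0.5, 1], size=(990, 3))
unusual = rng.normal(loc=[50_000, 15, 60], scale=[5_000, 3, 10], size=(10, 3))
events = np.vstack([routine, unusual])

model = IsolationForest(contamination=0.01, random_state=0).fit(events)
scores = model.score_samples(events)      # lower score = more anomalous

# Hand analysts only the ten most anomalous events instead of all 1,000.
flagged = np.argsort(scores)[:10]
print("events flagged for human review:", sorted(flagged.tolist()))
```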

IBM is also applying AI to security in the form of its Watson “cognitive computing” platform. The company has taught Watson to read through vast quantities of security research. Some 60,000 security-related blog posts are published every month and 10,000 reports come out every year, IBM estimates. “The juicy information is in human-readable form, not machine data,” Mr Barlow says.
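One small piece of turning that human-readable material into machine-usable data is extracting structured indicators from prose. The toy example below does this with regular expressions on a made-up blog snippet; Watson's actual natural-language pipeline is, of course, far richer.

```python
# Toy extraction of structured indicators from human-readable security prose.
# The snippet, patterns and indicator types are illustrative; this is not
# Watson's pipeline, only the general idea of structuring free text.

import re

post = (
    "Researchers link the campaign to CVE-2017-0144. The dropper beacons to "
    "203.0.113.77 and pulls a second stage from update.badexample.com."
)

cve_ids = re.findall(r"CVE-\d{4}-\d{4,7}", post)
ip_addrs = re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", post)
domains = [
    d for d in re.findall(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b", post.lower())
    if not re.fullmatch(r"(?:\d{1,3}\.){3}\d{1,3}", d)   # drop the IPs matched above
]

print("CVEs:   ", cve_ids)
print("IPs:    ", ip_addrs)
print("Domains:", domains)
```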

The company has about 50 customers using Watson as part of its security intelligence and analytics platform. The program learns from every piece of information it takes in.


“It went from literally being a grade-school kid. We had to teach it that a bug is not an insect, it’s a software defect. A back door doesn’t go into a house, it’s a vulnerability. Now it’s providing really detailed insights on particular [threats] and how their campaigns are evolving. And that’s just in a matter of months,” Mr Barlow says. “The more it learns, the faster it gets smarter.”

IBM says Watson performs 60 times faster than a human investigator and can reduce the time spent on complex analysis of an incident from an hour to less than a minute.

Another, even more futuristic technology could make Watson look as slow as a human: quantum computing. While machine learning and AI speed up the laborious process of sorting through data, the aim is that quantum computing will eventually be able to look at every data permutation simultaneously. Conventional computers represent data as ones or zeros. But Mr Driver says that in a quantum computer these can be “both [zeros and ones] and neither at the same time. It can have superpositions. It means we can look through everything and get information back incredibly quickly.

“The analogy we like to use is that of a needle in a haystack. A machine can be specially made to look for a needle in a haystack, but it still has to look under every piece of hay. Quantum computing means, I’m going to look under every piece of hay simultaneously and find the needle immediately.”
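The usual way to make that analogy precise is Grover's search algorithm (my framing; the article does not name it), which finds the needle in roughly the square root of the number of straws rather than instantaneously. A back-of-the-envelope comparison of query counts:

```python
# Back-of-the-envelope query counts for unstructured search, assuming the
# "haystack" maps onto Grover's algorithm: a quadratic, not instantaneous, speedup.

import math

for n in (10**6, 10**9, 10**12):
    classical = n // 2                                   # expected checks for a linear scan
    grover = math.ceil((math.pi / 4) * math.sqrt(n))     # optimal number of Grover iterations
    print(f"N = {n:>15,}   classical ~ {classical:>15,}   quantum ~ {grover:>10,}")
```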

He estimates that quantum computing for specific tasks will be more widely available over the next three to five years. “On this scale, the technology is still a way off, but there are companies that are developing it.”

The race to supply the processing power behind such AI workloads is also intensifying. Nvidia is racking up rising sales of its GPUs as neural-network training engines, and is morphing its architecture to better handle such jobs.

Google claims that neither its massive clusters of x86 CPUs nor Nvidia’s GPUs are adequate for its machine-learning workloads, so it has rolled out two versions of its own accelerator, the Tensor Processing Unit (TPU).

“This is Compute 2.0; it is absolutely a new world of computing,” said Nigel Toon, chief executive of Graphcore. “Google eventually will use racks and racks of TPUs and almost no CPUs because 98 per cent of its revenues come from search,” a good application for machine learning.

Eventually, machine-learning chips will appear in a broad range of embedded systems. With 18 million cars sold a year compared to 10 million servers, “self-driving cars could be a bigger market than the cloud for this technology, and it’s a market that never existed before,” said Toon.

The shared vision is an AI processor that can handle both training and inference for today’s rainbow of neural networks — and maybe even some emerging self-learning techniques. Such chips need to deliver performance through massive parallelism, yet be power-efficient and easy to program.

Even the basic math for the job is a subject of lively debate. Toon believes that a mix of 16-bit floating-point multiplication with 32-bit accumulation delivers optimal precision with minimal errors.

That is the approach taken by the new tensor cores in Nvidia’s Volta architecture, as well as by the competing high-end chip that Graphcore will sample to early partners in October. The startup is focused on a single large chip that uses novel memories and interconnects, both on and off the chip, to link its processing cells and clusters.
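A small numerical sketch of why that mix matters, using NumPy on an ordinary CPU as a stand-in for tensor-core hardware; the vector sizes and values are made-up assumptions.

```python
# Why accumulate in 32 bits: fp16 products summed into an fp16 accumulator lose
# precision, while the same products summed into fp32 stay close to the truth.
# NumPy on the CPU stands in for tensor-core hardware; the data here is made up.

import numpy as np

rng = np.random.default_rng(1)
a = rng.random(10_000).astype(np.float16)   # inputs stored in 16-bit, as on the device
b = rng.random(10_000).astype(np.float16)

reference = np.dot(a.astype(np.float64), b.astype(np.float64))   # high-precision baseline

fp16_acc = np.float16(0.0)
fp32_acc = np.float32(0.0)
for x, y in zip(a, b):
    p = x * y                               # 16-bit multiply, as the hardware would do
    fp16_acc = np.float16(fp16_acc + p)     # 16-bit accumulation: rounding error builds up
    fp32_acc = fp32_acc + np.float32(p)     # 32-bit accumulation: error stays small

print(f"float64 reference      : {reference:10.2f}")
print(f"fp16 multiply, fp16 acc: {float(fp16_acc):10.2f}")
print(f"fp16 multiply, fp32 acc: {float(fp32_acc):10.2f}")
```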

