Here’s how Microsoft, Google are using Artificial Intelligence to fight hackers


Last year, Microsoft’s Azure security team detected suspicious activity in the cloud computing usage of a large retailer: One of the company’s administrators, who usually logs on from New York, was trying to gain entry from Romania. And no, the admin wasn’t on vacation. A hacker had broken in.

Microsoft quickly alerted its customer, and the attack was foiled before the intruder got too far.

Chalk one up to a new generation of artificially intelligent software that adapts to hackers’ constantly evolving tactics. Microsoft, Alphabet Inc.’s Google, Amazon.com Inc. and various startups are moving away from solely using older “rules-based” technology designed to respond to specific kinds of intrusion, and are deploying machine-learning algorithms that crunch massive amounts of data on logins, behaviour and previous attacks to ferret out and stop hackers.

“Machine learning is a very powerful technique for security—it’s dynamic, while rules-based systems are very rigid,” says Dawn Song, a professor at the University of California at Berkeley’s Artificial Intelligence Research Lab. “It’s a very manual-intensive process to change them, whereas machine learning is automated, dynamic and you can retrain it easily.”

Hackers are themselves famously adaptable, of course, so they too could harness machine learning to create fresh mischief and overwhelm the new defences. For example, they could figure out how companies train their systems and use the data to evade or corrupt the algorithms. The big cloud services companies are painfully aware that the foe is a moving target but argue that the new technology will help tilt the balance in favor of the good guys.

“We will see an improved ability to identify threats earlier in the attack cycle and thereby reduce the total amount of damage and more quickly restore systems to a desirable state,” says Amazon Chief Information Security Officer Stephen Schmidt. He acknowledges that it’s impossible to stop all intrusions but says his industry will “get incrementally better at protecting systems and make it incrementally harder for attackers.”

Before machine learning, security teams used blunter instruments. For example, if anyone based at headquarters tried to log in from an unfamiliar locale, they were barred entry. Or spam emails featuring various misspellings of the word “Viagra” were blocked. Such systems often work.
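The rigid, rules-based approach described above can be sketched in a few lines. This is a hypothetical illustration of the general idea; the allow-list and function names are invented for this example, not any vendor’s actual code:

```python
# A minimal sketch of a rules-based login check: any login from a
# country outside a fixed allow-list is rejected outright.
# ALLOWED_COUNTRIES and rules_based_check are illustrative names.

ALLOWED_COUNTRIES = {"US"}  # e.g. everyone at headquarters logs in from the US

def rules_based_check(login_country: str) -> bool:
    """Return True if the login is permitted, False if it is blocked."""
    return login_country in ALLOWED_COUNTRIES

# The rule works for the common case...
print(rules_based_check("US"))  # True: the admin logging in from New York
# ...but it blocks a legitimate employee on a business trip just as
# readily as a hacker, and every exception must be edited in by hand.
print(rules_based_check("RO"))  # False: blocked, traveller or intruder alike
```

The hand-maintained allow-list is exactly the rigidity Song describes: changing the rule is a manual process, whereas a learned model can simply be retrained.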

But they also flag lots of legitimate users—as anyone prevented from using their credit card while on vacation knows. A Microsoft system designed to protect customers from fake logins had a 2.8% rate of false positives, according to Azure Chief Technology Officer Mark Russinovich. That might not sound like much but was deemed unacceptable, since Microsoft’s larger customers can generate billions of logins.
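The scale is what makes 2.8 percent untenable. For a customer generating, say, two billion logins (an assumed figure for illustration, not one from the article), the arithmetic looks like this:

```python
# Illustrative arithmetic: why a 2.8% false-positive rate is unacceptable
# at cloud scale. The two-billion login count is an assumed example.
logins = 2_000_000_000
old_rate = 0.028      # 2.8% false positives
new_rate = 0.00001    # 0.001%, the rate Microsoft reports after retraining

print(int(logins * old_rate))  # 56000000 legitimate logins wrongly flagged
print(int(logins * new_rate))  # 20000 with the improved system
```

Tens of millions of wrongly blocked logins would mean tens of millions of locked-out employees and support calls, which is why even a seemingly small rate had to come down by orders of magnitude.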

To do a better job of figuring out who is legit and who isn’t, Microsoft technology learns from the data of each company using it, customizing security to that client’s typical online behavior and history. Since rolling out the service, the company has managed to bring the false-positive rate down to 0.001 percent. This is the system that outed the intruder in Romania.
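The per-customer idea can be sketched as a toy baseline model: instead of one fixed rule, build a profile from each company’s own login history and flag only what is rare for that company. This is a simplified illustration of the general technique, not Microsoft’s actual model; the class, threshold and country codes are assumptions for the example:

```python
from collections import Counter

class LoginBaseline:
    """Toy per-customer baseline: flag logins from countries that are
    rare in this customer's own history. Illustrative only."""

    def __init__(self, threshold: float = 0.01):
        self.counts = Counter()
        self.total = 0
        self.threshold = threshold  # flag countries below 1% of history

    def observe(self, country: str) -> None:
        """Record one historical login for this customer."""
        self.counts[country] += 1
        self.total += 1

    def is_suspicious(self, country: str) -> bool:
        """A login is suspicious if it is rare in this customer's history."""
        if self.total == 0:
            return True
        return self.counts[country] / self.total < self.threshold

baseline = LoginBaseline()
for _ in range(500):
    baseline.observe("US")   # the admin almost always logs in from New York
baseline.observe("GB")       # the occasional business trip

print(baseline.is_suspicious("US"))  # False: matches the learned pattern
print(baseline.is_suspicious("RO"))  # True: never seen before, worth an alert
```

Because the baseline is learned from each customer’s data rather than written by hand, updating it is just a matter of observing more logins—the “retrain it easily” property Song contrasts with rules-based systems.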

Training these security algorithms falls to people like Ram Shankar Siva Kumar, a Microsoft manager who goes by the title of Data Cowboy. Siva Kumar joined Microsoft six years ago from Carnegie Mellon after accepting a second-round interview because his sister was a fan of “Grey’s Anatomy,” the medical drama set in Seattle. He manages a team of about 18 engineers who develop the machine learning algorithms and then make sure they’re smart and fast enough to thwart hackers and work seamlessly with the software systems of companies paying big bucks for Microsoft cloud services.

Siva Kumar is one of the people who gets the call when the algorithms detect an attack. He has been woken in the middle of the night, only to discover that Microsoft’s in-house “red team” of hackers was responsible. (They bought him cake to compensate for lost sleep.)

The challenge is daunting. Millions of people log into Google’s Gmail each day alone. “The amount of data we need to look at to make sure whether this is you or an impostor keeps growing at a rate that is too large for humans to write rules one by one,” says Mark Risher, a product management director who helps prevent attacks on Google’s customers.