MIT bets on deep learning to fight cybercrime

Despite best efforts and innovation, cybercrime is on the rise. MIT scientists and leading network defenders are urging organizations to explore deep learning to secure their systems.

In the first quarter of 2022 alone, there were 404 publicly reported data breaches in the US. Ransomware breaches increased by 13% in a single year.

“No wonder an increasing number of organizations are beginning to explore how deep learning, and its ability to mimic the human brain, can outsmart and outpace the world’s fastest and most dangerous cyber threats,” MIT Technology Review said in its research paper produced together with cybersecurity company Deep Instinct.

MIT is looking at deep learning–driven malware prevention, hoping it could give organizations an edge in the innovation race against ransomware groups, which are enhancing their evasive capabilities with sandbox detection or even adversarial artificial intelligence (AI).

Deep learning, the most advanced form of AI, uses neural networks to autonomously anticipate and prevent unknown malware and zero-day attacks.

Deep learning is also praised in the paper for addressing the limitations of machine learning: it circumvents the need for highly skilled and experienced data scientists to manually feed curated data sets into a solution.

“A deep learning model, specifically developed for cybersecurity, can absorb and process vast volumes of raw data to fully train the system. Once trained, these neural networks become autonomous and do not require constant human intervention. This combination of a raw data–based learning methodology and larger data sets means that deep learning is eventually able to accurately identify much more complex patterns than machine learning, at far faster speeds,” the paper reads.
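The contrast the paper draws can be sketched in code. The example below is purely illustrative, not from the paper: the function names and the toy features are hypothetical, and a real system would use a trained neural network rather than these stand-ins. It shows the key difference in inputs: classic machine learning needs hand-engineered features, while a deep model can consume raw bytes directly.

```python
# Hypothetical sketch of the two input pipelines. The feature list and
# byte handling here are invented for illustration only.

def extract_features(file_bytes):
    # Classic ML: a data scientist must hand-pick and maintain features.
    return {
        "size": len(file_bytes),
        "has_mz_header": file_bytes[:2] == b"MZ",   # PE executable marker
        "byte_diversity": len(set(file_bytes)) / 256,
    }

def deep_model_input(file_bytes, max_len=16):
    # Deep learning: raw bytes, normalized to [0, 1] and padded or
    # truncated to a fixed length -- no manual feature design.
    padded = file_bytes[:max_len].ljust(max_len, b"\x00")
    return [b / 255 for b in padded]

sample = b"MZ\x90\x00fake-executable"
print(extract_features(sample))
print(deep_model_input(sample))
```

The raw-byte representation is what lets a deep model learn patterns no analyst anticipated, at the cost of needing far larger training sets.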

Deep learning, MIT reminded, is powering autonomous vehicles like Tesla's, speech recognition (Siri), recommendation engines (Netflix), and linguistic tools (Google Translate).

“Deep learning outshines any deny list, heuristic-based, or standard machine-learning approach,” Mirel Sehic, vice president and general manager for Honeywell Building Technologies (HBT), is quoted in the paper. “The time it takes for a deep learning–based approach to detect a specific threat is much quicker than any of those elements combined.”

AI explained

Deep learning can also counter the threat of adversarial AI. Adversarial machine learning tricks AI models by feeding them deceptive data.

“Essentially, adversaries intentionally exploit the way traditional machine learning–based solutions work by finding a bias that will bypass their detection capabilities and deceive them into accepting malicious files as benign. However, because a deep learning network doesn’t rely on feature engineering, it’s more challenging for threat actors to create malware that can understand and exploit how the system works,” the paper reads.
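The bias-exploitation attack described above can be illustrated with a toy example. Everything here is a hypothetical sketch: the classifier, its printable-ASCII heuristic, and the byte payload are invented to show the mechanism, not taken from the paper or any real product.

```python
# Hypothetical sketch: an attacker probes a brittle, feature-based
# model and flips its verdict by padding the file with benign content.

def brittle_ml_classifier(file_bytes):
    # Toy model with a learned bias: files dominated by printable
    # ASCII "document" bytes are assumed benign.
    printable = sum(32 <= b < 127 for b in file_bytes)
    return "benign" if printable / len(file_bytes) > 0.9 else "malicious"

payload = bytes([0x00, 0x90, 0xE8, 0x01]) * 8  # stand-in for shellcode
print(brittle_ml_classifier(payload))           # correctly flagged

# Adversarial evasion: append harmless text until the bias dominates
# the engineered feature -- the payload itself is unchanged.
evasive = payload + b"A" * 1000
print(brittle_ml_classifier(evasive))           # now slips through
```

A deep model trained end to end on raw bytes has no single hand-picked feature to saturate in this way, which is the paper's argument for why such evasion becomes harder, though not impossible.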

Deep learning mimics the human brain's functionality and therefore can indicate intrusion by threat actors or malware with “unmatched speed and accuracy.”

“Doing so helps organizations better anticipate and prevent attacks before they have a chance to tarnish a brand’s reputation, erode share price, or lead to revenue losses,” the paper reads.

Deep learning has been around since the 1940s but was out of reach for many organizations due to the high cost and complexity of graphics processing units (GPUs).

