UPDATED 22:21 EST / JULY 18 2019

SECURITY

Researchers trick AI-based antivirus into accepting malicious files

Cybersecurity researchers in Australia have found a way to trick an AI-based antivirus engine provided by BlackBerry Cylance into accepting malware as legitimate, a discovery that may cast doubt on the methodology used by some companies in the burgeoning field of artificial intelligence-driven cybersecurity.

Detailed by Adi Ashkenazy and Shahar Zini from Skylight Cyber and first reported by Motherboard today, the method involved subverting the machine learning algorithm in CylancePROTECT, the company's endpoint detection product. The method did not involve altering the code in the malware to evade detection but instead used a “global bypass” to fool the Cylance algorithm.

The bypass method, in this case, involved taking strings from a nonmalicious file and appending them to a malicious file to trick the system into classifying it as benign. The method is said to work because Cylance’s machine-learning model has been trained in a way that strongly favors a certain benign file, causing it to overlook malicious code when it sees strings from that benign file attached to a malicious one.
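To illustrate the general idea only, the Python sketch below extracts printable strings from a benign file and appends them to a malicious sample, then shows how a toy scorer that over-weights known-benign strings can be skewed by the padding alone. The file paths, the `extract_strings` and `toy_score` functions, and the scoring logic are all hypothetical assumptions for illustration; this is not the researchers' actual code or Cylance's real model.

```python
# Hedged illustration: a toy string-based "benign score", not Cylance's engine.
import re
import sys


def extract_strings(data, min_len=6):
    """Return printable ASCII strings of at least min_len bytes."""
    return re.findall(rb"[ -~]{%d,}" % min_len, data)


def toy_score(data, benign_markers):
    """Toy 'confidence the file is benign': the fraction of the file's
    strings that also appear in a known-benign reference set. A real ML
    engine is far more complex, but a model over-weighted toward such
    features could be skewed in a similar way."""
    strings = extract_strings(data)
    if not strings:
        return 0.0
    hits = sum(1 for s in strings if s in benign_markers)
    return hits / len(strings)


if __name__ == "__main__":
    benign_path, malicious_path = sys.argv[1], sys.argv[2]
    with open(benign_path, "rb") as f:
        benign = f.read()
    with open(malicious_path, "rb") as f:
        malicious = f.read()

    markers = set(extract_strings(benign))
    print("original score:", toy_score(malicious, markers))

    # The "global bypass" idea: append the benign file's strings to the
    # malicious file without touching its executable code at all.
    padded = malicious + b"\n" + b"\n".join(sorted(markers))
    print("padded score:  ", toy_score(padded, markers))
```

Run against any two local files, the second score rises simply because the appended strings dominate the feature count, which is the kind of bias the researchers say they exploited.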

“As far as I know, this is a world-first, proven global attack on the machine learning mechanism of a security company,” Ashkenazy told Motherboard. “After around four years of super-hype [about AI], I think this is a humbling example of how the approach provides a new attack surface that was not possible with legacy [antivirus software].”

It may be the first public demonstration of a new threat vector that targets AI and machine learning-based detection, but Kevin Bocek, vice president of security strategy and threat intelligence at cybersecurity firm Venafi Inc., told SiliconANGLE that the general idea is not new.

“Security researchers have known that next-gen AV can be tricked for quite a while; in particular we know the code signing certificates allow a wide range of malware to evade detection,” Bocek explained. “This is the reason that Stuxnet — which also evaded AV detection — was so successful, and it is used in many malware campaigns today.”

The research, he added, serves as a reminder to security teams that cybercriminals have the capability to evade next-generation antivirus tools, so “we should all expect to see similar vulnerabilities in the future.”

Gregory Webb, chief executive officer of malware protection firm Bromium Inc., noted that the news raises doubts about the concept of categorizing code as “good” or “bad.”

“This exposes the limitations of leaving machines to make decisions on what can and cannot be trusted,” Webb said. “Ultimately, AI is not a silver bullet.”

Although he said AI can provide valuable insights and forecasts, “it is not going to be right every time and will always be fallible. If we place too much trust in such systems’ ability to know what is good and bad we will expose ourselves to untold risk – which if left unattended could create huge security blind spots, as is the case here.”

Photo: ShahanB/Wikimedia Commons
