UPDATED 15:31 EDT / NOVEMBER 01 2016

NEWS

Symantec unveils new ‘endpoint’ security system powered by machine learning

Mountain View-based cyber security firm Symantec Corp., best known for its Norton antivirus suite, has just unveiled its newest security system for “endpoint” devices such as laptops and smartphones.

Endpoint Protection 14 is a major leap forward in security technology, the company says, thanks to its use of machine learning to fight exploits and coordinated attacks.

According to Symantec, machine learning allows Endpoint Protection to recognize patterns that could signify an attack and to actively mitigate those threats in real time. The company says its software has a 99.9 percent efficacy rate with a low number of false positives, although it did not share exactly what that low number is. Symantec also noted that thanks to its increased reliance on the cloud for threat lookups, Endpoint Protection 14 now boasts a 70 percent smaller footprint than its predecessor, making daily definition updates both smaller and faster.
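Symantec has not published details of its detection model, but the general idea of pattern-based threat scoring can be illustrated with a toy sketch. Everything below — the pattern names, weights, and threshold — is an illustrative assumption, not Symantec's actual method:

```python
# Toy pattern-based detector: score an endpoint's observed behaviors
# against a table of known-suspicious patterns and flag anything that
# crosses a threshold. Pattern names and weights are hypothetical.

SUSPICIOUS_PATTERNS = {
    "rapid_file_encryption": 0.6,   # many files rewritten in a short window
    "registry_persistence": 0.3,    # autorun/startup keys modified
    "outbound_beaconing": 0.4,      # periodic connections to a single host
}

def threat_score(observed_events):
    """Sum the weights of distinct suspicious patterns, capped at 1.0."""
    score = sum(SUSPICIOUS_PATTERNS.get(e, 0.0) for e in set(observed_events))
    return min(score, 1.0)

def is_threat(observed_events, threshold=0.5):
    """Flag the endpoint if its combined score reaches the threshold."""
    return threat_score(observed_events) >= threshold
```

A real machine-learning system would learn such weights from labeled telemetry rather than hard-coding them, and the threshold choice is exactly where the trade-off between efficacy and false positives mentioned above comes in.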

“Symantec Endpoint Protection 14 is a major leap forward in endpoint protection, delivering the latest innovations in endpoint security on a single platform and from a security company you can trust,” Mike Fey, president and chief operating officer at Symantec, said in a statement.

The cyber security arms race

Machine learning and AI may offer a huge boost to computer security, but those same tools could also be used for the opposite purpose: to track down new vulnerabilities and exploit them before they can be plugged up. A common fear when it comes to AI is the idea that a computer could spontaneously become self-aware and declare war on humanity, but a more realistic near-term threat is the possibility that a malicious AI could be created intentionally as a sort of super-powered computer virus.

Earlier this year, researchers at the University of Louisville in Kentucky published a paper titled “Unethical Research: How to Create a Malevolent Artificial Intelligence,” which outlines the conditions and environment that could lead to the creation of a malicious AI, either accidentally or intentionally. The researchers concluded that a lack of oversight of the AI research community is a particularly important risk factor, along with the creation of closed-source AI software that is understood by only a select few.

Of course, these factors are primarily relevant only for large companies conducting AI research such as Google or Facebook, as these high-powered tools are beyond the capabilities of a lone developer — for now.

Image courtesy of Symantec
