Protect AI rolls out Guardian to manage machine learning model security
Artificial intelligence and machine learning cybersecurity startup Protect AI Inc. today announced Guardian, a new secure gateway that lets organizations enforce security policies on machine learning models and prevent malicious code from entering their environments.
The service is based on ModelScan, an open-source tool from Protect AI that scans machine learning models to determine if they contain unsafe code. The company says Guardian brings together the best of Protect AI’s open-source offering to enable enterprise-level enforcement and management of model security. The service also extends coverage with proprietary scanning capabilities.
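For context on the underlying tooling, the open-source scanner can be run directly against downloaded model files. The snippet below is a minimal sketch of that workflow, not something detailed in Protect AI’s announcement: it assumes the modelscan package has been installed from PyPI, the file name is a placeholder, and the exact flags and report format may vary between versions.

```python
# Minimal sketch: scanning a locally downloaded model artifact with the
# open-source ModelScan command-line tool. Assumes "pip install modelscan"
# has been run; "downloaded_model.pkl" is a placeholder path.
import subprocess

result = subprocess.run(
    ["modelscan", "-p", "downloaded_model.pkl"],  # -p / --path points at the file or directory to scan
    capture_output=True,
    text=True,
)

# The report flags serialized operators that could execute code when the model is loaded.
print(result.stdout)
```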
The service has been designed to address concerns about security in open-source foundation models on platforms such as Hugging Face. Although the models are often vital for powering a range of AI applications, they also introduce security risks, as the open exchange of files on these repositories can lead to the unintended spread of malicious software.
“ML models are new types of assets in an organization’s infrastructure, yet they are not scanned for viruses and malicious code with the same rigor as even a PDF file before they are used,” said Protect AI Chief Executive Ian Swanson. “There are thousands of models downloaded millions of times from Hugging Face on a monthly basis and these models can contain dangerous code. Guardian enables customers to take back control over open-source model security.”
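The danger Swanson describes stems largely from how many models are serialized. Python’s pickle format, which underpins several common model file types, can execute arbitrary code the moment a file is deserialized. The snippet below is a generic illustration of that well-known mechanism, not Protect AI’s code; it shows the class of payload that scanners such as ModelScan are built to flag.

```python
# Generic illustration (not Protect AI's code) of why serialized models are risky:
# pickle-based model files can run arbitrary code on load.
import pickle

class MaliciousPayload:
    # __reduce__ tells pickle how to reconstruct the object; whatever callable it
    # returns is executed during deserialization, before the caller sees the "model".
    def __reduce__(self):
        import os
        return (os.system, ("echo 'arbitrary code ran during model load'",))

blob = pickle.dumps(MaliciousPayload())  # an attacker ships this blob as "model.pkl"
pickle.loads(blob)                       # simply loading the file triggers the command
```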
Unlike other open-source alternatives on the market, Protect AI’s Guardian acts as a secure gateway, bridging machine learning development and deployment processes that rely on Hugging Face and other model repositories. Guardian applies proprietary vulnerability scanners to open-source models.
Guardian also provides advanced access control features and dashboards, giving security teams control over which models enter their environment and insight into model origins, creators and licensing. The service integrates with existing security frameworks and complements Protect AI’s Radar to give organizations broad visibility into their AI and machine learning threat surface.
Although Guardian is new, the underlying open-source technology Protect AI built, ModelScan, isn’t. Since its launch last year, ModelScan has been used to evaluate more than 400,000 models hosted on Hugging Face to identify unsafe ones, with its knowledge base refreshed nightly. To date, more than 3,300 models have been found capable of executing rogue code.
Protect AI is a venture capital-backed startup, having last raised $35 million in funding in July. Investors in the company include Evolution Equity Partners LLP, Salesforce Ventures LLC, Acrew Capital LP, Boldstart Ventures LLC, Knollwood Capital LLC and Pelion Ventures Partners LLC.