

Software supply chain company JFrog Ltd. today announced a number of new releases aimed at bringing greater trust, transparency and security to the world of artificial intelligence.
The company announced three key updates designed to help enterprises safely deploy machine learning models into production, address rising threats in AI supply chains and simplify the delivery of generative AI applications at scale.
Leading the list of announcements is a new partnership with Hugging Face Inc., the world’s largest repository of open-source machine learning models. Under the partnership, JFrog will provide advanced security scanning across all models hosted on the Hugging Face Hub, with a “JFrog Certified” badge highlighting models that pass verification.
The integration seeks to address growing concerns over the security of machine learning supply chains following the discovery of malicious models on the platform in early 2024. With JFrog scanning technology embedded directly into the Hub, Hugging Face users will gain greater transparency into potential threats like backdoors, remote code execution and model serialization attacks.
JFrog said its analysis has already identified 25 previously undetected malicious models, highlighting the need to secure open-source machine learning assets. With the new integration, scans will run continuously, allowing developers and data scientists to assess the safety of models before downloading or deploying them into production environments.
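Serialization attacks in particular exploit the fact that many model formats are built on Python's pickle, which can execute arbitrary code the moment a file is loaded. Below is a minimal, self-contained illustration of that general risk; the class name and payload are invented for the example and have nothing to do with any specific model or with JFrog's scanner.

```python
# Minimal sketch of why pickle-based model files are risky: unpickling runs
# whatever callable an object's __reduce__ hook returns.
# The payload here is purely illustrative.
import os
import pickle


class MaliciousPayload:
    def __reduce__(self):
        # On pickle.loads(), pickle will call os.system with this argument.
        return (os.system, ("echo 'arbitrary code ran during model load'",))


# An attacker ships a blob like this inside a "model" file.
blob = pickle.dumps(MaliciousPayload())

# A victim who loads the untrusted file triggers the payload immediately.
pickle.loads(blob)  # executes the shell command above
```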
The second announcement today sees JFrog teaming up with Nvidia Corp. to integrate its platform with Nvidia Inference Microservices, part of the Nvidia AI Enterprise suite. The collaboration is designed to provide a unified, end-to-end solution for securely deploying GPU-optimized machine learning models and large language models into production.
The integration will allow enterprises to manage and deploy pre-approved models, such as Meta Platforms Inc.’s Llama 3 and models from Mistral AI, with full security, governance and traceability built into their existing DevSecOps workflows. JFrog says the approach helps reduce the complexity of scaling generative AI projects while maintaining compliance with evolving regulatory requirements.
By using JFrog Artifactory as a central hub for managing software components, organizations can track, secure and optimize the delivery of AI workloads alongside traditional applications. The idea is to ensure continuous security scanning, version control and automated policy enforcement across every stage of the AI model lifecycle.
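As a rough illustration of that hub-and-proxy pattern, a data science team could route model downloads through internal, managed storage rather than pulling directly from public sources. The sketch below assumes a hypothetical Artifactory endpoint that speaks the Hugging Face Hub protocol; the URL, repository path and model identifier are placeholders, not real infrastructure.

```python
# Hypothetical sketch: resolving models through an internal proxy instead of
# the public hub, so every download passes through managed, scanned storage.
# The endpoint URL, repository path and model ID below are placeholders.
import os

# huggingface_hub reads HF_ENDPOINT when it is imported, so set it first.
os.environ["HF_ENDPOINT"] = (
    "https://artifactory.example.com/artifactory/api/huggingfaceml/ml-remote"
)

from huggingface_hub import snapshot_download

# Downloads the model snapshot via the proxy, which can cache it, record the
# version and apply security policy before serving it to the developer.
local_dir = snapshot_download(repo_id="example-org/approved-model")
print("Model cached at:", local_dir)
```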
The integration also addresses a key barrier to enterprise AI adoption by making it easier to move from experimental projects to reliable, large-scale deployments. The solution is aimed at supporting production-grade performance with enterprise-level peace of mind while offering flexible options for multicloud, on-premises and air-gapped environments.
The final announcement is JFrog ML, a new MLOps solution designed to unify machine learning development with traditional DevSecOps practices. The platform provides an end-to-end framework for securely managing, deploying and monitoring AI models alongside other software artifacts.
JFrog ML helps organizations apply the same governance, traceability and security controls across their entire software supply chain by treating machine learning models as first-class software packages. The approach is aimed at reducing friction among data science, engineering and operations teams, making it easier to move models from experimentation to production.
The new offering includes integrations with Hugging Face, Amazon Web Services Inc.’s SageMaker, MLflow and Nvidia NIM to support a wide range of workflows, from training to deployment. JFrog ML also features built-in capabilities for dataset management, feature stores and automated model serving at scale.
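For a rough sense of what treating a model as a versioned, governed artifact looks like in practice, the sketch below uses MLflow's model registry, one of the integrations named above. The model, run and registry name are invented for the example and do not reflect JFrog ML's own API.

```python
# Minimal sketch (not JFrog ML's actual API): giving a trained model a
# tracked, versioned identity in a registry, the same way application
# packages are versioned, so promotion and policy checks can key off it.
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run():
    # Log the model as an artifact and register it under a named entry,
    # which assigns it an incrementing version number in the registry.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demo-classifier",
    )
```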
“As the demand for AI-powered applications continues to grow, so do the concerns around use of open source ML models and platforms,” said JFrog co-founder and Chief Executive Shlomi Ben Haim. “JFrog ML combines a superior, straightforward and hassle-free user experience for bringing models to production.”